
Proof by Mathematical Induction
Principle of Mathematical Induction (takes three steps)
TASK: Prove that the statement Pₙ is true for all n ∈ ℤ⁺.
1. Check that the statement Pn is true for n = 1.
(Or, if the assertion is that the statement is true for n β‰₯ a, prove it for n = a.)
2. Assume that the statement is true for n = k
(inductive hypothesis)
3. Prove that if the statement is true for n = k, then it must also be true for n = k + 1
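A short worked example (added for illustration; not part of the original notes): prove Pₙ : 1 + 2 + ⋯ + n = n(n + 1)/2 for all n ∈ ℤ⁺.
1. For n = 1: LHS = 1 and RHS = 1·2/2 = 1, so P₁ is true.
2. Assume Pₖ is true: 1 + 2 + ⋯ + k = k(k + 1)/2.
3. Then 1 + 2 + ⋯ + k + (k + 1) = k(k + 1)/2 + (k + 1) = (k + 1)(k + 2)/2, which is exactly Pₖ₊₁.
Hence, by the principle of mathematical induction, Pₙ is true for all n ∈ ℤ⁺.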
Complex numbers
∎ π‘Ž + 𝑏𝑖 ,
π‘Žπœ–π‘…,
∎ 𝑧 = π‘Ž + 𝑏𝑖
π‘πœ–π‘…
𝑖 = βˆšβˆ’1
π‘Ž = 𝑅𝑒(𝑧),
∎ 𝑔𝑖𝑣𝑒𝑛 𝑧 = π‘Ž + 𝑏𝑖
∎ 𝑔𝑖𝑣𝑒𝑛 𝑧1 = 𝑒 π‘–πœƒ1
𝑏 = πΌπ‘š(𝑧)
& 𝑀 = 𝑐 + 𝑑𝑖,
& 𝑧2 = 𝑒 𝑖 πœƒ2 ,
π‘Ž + 𝑏𝑖 = 𝑐 + 𝑑𝑖
⟺ π‘Ž=𝑐 & 𝑏=𝑑
𝑧1 = 𝑧2 ⟺ |𝑧1 | = |𝑧2 | & πœƒ1 = πœƒ2 + 2π‘˜πœ‹. π‘˜ = 0, ±1, ±2 …
Presentation of complex number in Cartesian and polar coordinate system
β–ͺ π‘€π‘œπ‘‘π‘’π‘™π‘’π‘  π‘œπ‘Ÿ π΄π‘π‘ π‘œπ‘™π‘’π‘‘π‘’ π‘£π‘Žπ‘™π‘’π‘’: |𝑧| = π‘Ÿ = √π‘₯ 2 + 𝑦 2
𝑦
β–ͺ π΄π‘Ÿπ‘”π‘’π‘šπ‘’π‘›π‘‘: π‘Žπ‘Ÿπ‘” 𝑧 = πœƒ = π‘Žπ‘Ÿπ‘ π‘‘π‘Žπ‘›
(π‘“π‘Ÿπ‘œπ‘š π‘π‘–π‘π‘‘π‘’π‘Ÿπ‘’)
π‘₯
β–ͺ π‘₯ = π‘Ÿπ‘π‘œπ‘  πœƒ, 𝑦 = π‘Ÿπ‘ π‘–π‘› πœƒ
z = x + yi = r(cos θ + i sin θ) = r e^(iθ) = r cis θ
    (Cartesian form)      (polar / modulus−argument form)      (Euler form)
The Argand plane is the complex plane.
▪ z₁ z₂ = [|z₁| e^(iθ₁)] [|z₂| e^(iθ₂)] = |z₁||z₂| e^(i(θ₁ + θ₂))
▪ z₁ / z₂ = [|z₁| e^(iθ₁)] / [|z₂| e^(iθ₂)] = (|z₁| / |z₂|) e^(i(θ₁ − θ₂))
Very useful for fast conversions from Cartesian into Euler form:
ex:  z = −2i   →   z = 2 e^(i·3π/2)     (the argument is measured in radians)
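The conversion above can be checked numerically. A minimal sketch (not part of the original notes) using Python's standard cmath module; note that cmath reports the argument in (−π, π], so −π/2 and 3π/2 describe the same direction:

```python
import cmath

z = -2j                                  # z = -2i in Cartesian form
r, theta = cmath.polar(z)                # modulus and argument (in radians)
print(r, theta)                          # 2.0  -1.5707963...  i.e. 2e^(-i*pi/2) = 2e^(i*3pi/2)

w = cmath.rect(2, 3 * cmath.pi / 2)      # back from modulus-argument form to Cartesian form
print(w)                                 # approximately -2j (up to rounding error)
```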
Properties of modulus and argument
β–ͺ |𝑧 βˆ— | = |𝑧|
& π‘Žπ‘Ÿπ‘” (𝑧 βˆ— ) = βˆ’π‘Žπ‘Ÿπ‘” 𝑧
β–ͺ 𝑧𝑧 βˆ— = |𝑧|2
β–ͺ 𝑒 π‘–πœƒ = 𝑒 𝑖(πœƒ+π‘˜2πœ‹)
π‘˜πœ–π‘
De Moivre’s Theorem
zⁿ = [r(cos θ + i sin θ)]ⁿ = rⁿ (cos θ + i sin θ)ⁿ
zⁿ = [r e^(iθ)]ⁿ = rⁿ e^(inθ) = rⁿ (cos nθ + i sin nθ)
∴ (cos nθ + i sin nθ) = (cos θ + i sin θ)ⁿ
Finding n-th root of a complex number z
z = |z| e^(iθ) = |z| e^(i(θ + 2kπ))
ⁿ√z = ⁿ√|z| · e^(i(θ + 2kπ)/n) = ⁿ√|z| · {cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n)},    k = 0, 1, 2, …, n − 1
Only the values k = 0, 1, …, n − 1 give different values of ⁿ√z. There are exactly n nth roots of z.
Geometrically, the nth roots are the vertices
of a regular polygon with n sides in the Argand plane.
The famous nth roots of unity, ⁿ√1 (e.g. ⁷√1), are the
solutions of the equation zⁿ = 1.
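A minimal numerical sketch (not part of the original notes, using Python's cmath) of De Moivre's theorem and of the n distinct nth roots:

```python
import cmath

z = 1 + 1j
r, theta = cmath.polar(z)

# De Moivre: z^5 = r^5 * e^(i*5*theta)
print(z ** 5, cmath.rect(r ** 5, 5 * theta))      # both (-4-4j), up to rounding

# the n nth roots of z: |z|^(1/n) * e^(i*(theta + 2*k*pi)/n), k = 0, ..., n-1
n = 4
roots = [cmath.rect(r ** (1 / n), (theta + 2 * k * cmath.pi) / n) for k in range(n)]
for w in roots:
    print(w, w ** n)                              # each w**n returns (approximately) 1+1j
```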
Remainder and conjugate roots of polynomial equations with real
coefficients
Real Polynomials
A real polynomial is a polynomial with only real coefficients
Two polynomials are equal if and only if they have the same degree (order) and corresponding terms have
equal coefficients.
if 2x³ + 3x² – 4x + 6 = ax³ + bx² + cx + d then
a = 2, b = 3, c = – 4, d = 6
Remainder
If P(x) is divided by D(x) until a remainder R(x) is obtained:
P(x)/D(x) = Q(x) + R(x)/D(x),    i.e.    P(x) = D(x)Q(x) + R(x)
D(x) is the divisor
Q(x) is the quotient
R(x) is the remainder
When we divide by a polynomial of degree 1 ("ax+b") the remainder will have degree 0 (a constant)
From here we get:
The Remainder Theorem
When a polynomial P(x) is divided by (x – k) until a remainder R is obtained, then R = P(k):
proof:
𝑃(π‘₯)
𝑅
= 𝑄(π‘₯) +
π‘₯βˆ’π‘˜
π‘₯βˆ’π‘˜
𝑃(π‘₯) = 𝑄(π‘₯)(π‘₯ βˆ’ π‘˜) + 𝑅
𝑃(π‘˜) = 𝑅
The Factor Theorem
k is a zero of P(x) ⇔ (x – k) is a factor of P(x)
proof:
k is a zero of P(x)  ⇔  P(k) = 0  ⇔  R = 0
as P(x) = Q(x)(x − k) + R,  R = 0 gives P(x) = Q(x)(x − k)
⇔ (x – k) is a factor of P(x)
Corollary
(x – k) is a factor of P(x) ⇔ there exists a polynomial Q(x) such that P(x) = (x – k) Q(x)
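A quick numerical check of the Remainder and Factor Theorems (a sketch, not part of the original notes; it assumes numpy is available):

```python
import numpy as np

P = [2, 3, -4, 6]                              # P(x) = 2x^3 + 3x^2 - 4x + 6, highest power first
k = 2
quotient, remainder = np.polydiv(P, [1, -k])   # divide P(x) by (x - k)
print(remainder, np.polyval(P, k))             # [26.] and 26: the remainder equals P(k)

# Factor Theorem: x = 1 is a zero of x^2 - 3x + 2, so (x - 1) is a factor
print(np.polydiv([1, -3, 2], [1, -1]))         # quotient [1., -2.], remainder [0.]
```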
Properties of Real Polynomials
Polynomials: Sums and Products of Roots
Let p and q be the roots of the quadratic equation ax² + bx + c = 0:
p + q = −b/a,        pq = c/a
Let p, q and r be the roots of the cubic equation ax³ + bx² + cx + d = 0:
p + q + r = −b/a,    pqr = −d/a
Vectors, Lines and Planes
Vector as position vector of point A in three dimensions in Cartesian coordinate system:
π‘Žπ‘₯
π‘Žβƒ— = π‘Žπ‘₯ 𝑖̂ + π‘Žπ‘¦ 𝑗̂ + π‘Žπ‘§ π‘˜Μ‚ ≑ (π‘Žπ‘¦ ) ≑ (π‘Žπ‘₯ π‘Žπ‘¦ π‘Žπ‘§ )
π‘Žπ‘§
Both, position vector of point A and point A have the same coordinates:
π‘Žπ‘₯
π‘Žβƒ— = (π‘Žπ‘¦ ) ,
π‘Žπ‘§
𝐴 = (π‘Žπ‘₯ , π‘Žπ‘¦ , π‘Žπ‘§ )
𝑖̂, 𝑗̂ π‘Žπ‘›π‘‘ π‘˜Μ‚ are unit vectors in x, y and z directions.
1
𝑖̂ = (0)
0
0
𝑗̂ = (1)
0
|π‘Žβƒ—| = βˆšπ‘Žπ‘₯2 + π‘Žπ‘¦2 + π‘Žπ‘§2
0
π‘˜Μ‚ = (0)
1
|π‘Žβƒ—| 𝑖𝑠 π‘π‘Žπ‘™π‘™π‘’π‘‘ π‘šπ‘Žπ‘”π‘›π‘–π‘‘π‘’π‘‘π‘’, π‘™π‘’π‘›π‘”π‘‘β„Ž, π‘šπ‘œπ‘‘π‘’π‘™π‘’π‘  π‘œπ‘Ÿ π‘›π‘œπ‘Ÿπ‘š
Unit vector A unit vector is a vector whose length is 1. It gives direction only!
π‘ŽΜ‚ =
π‘Žπ‘₯
π‘Žπ‘₯ 𝑖̂ + π‘Žπ‘¦ 𝑗̂ + π‘Žπ‘§ π‘˜Μ‚
π‘Žβƒ—
1
=
=
(π‘Žπ‘¦ )
|π‘Žβƒ—|
βˆšπ‘Žπ‘₯2 + π‘Žπ‘¦2 + π‘Žπ‘§2
βˆšπ‘Žπ‘₯2 + π‘Žπ‘¦2 + π‘Žπ‘§2 π‘Žπ‘§
Vector between two points
π‘₯𝐡 βˆ’ π‘₯𝐴
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ—
𝐴𝐡 = (𝑦𝐡 βˆ’ 𝑦𝐴 ) = (π‘₯𝐡 βˆ’ π‘₯𝐴 ) 𝑖̂ + (𝑦𝐡 βˆ’ 𝑦𝐴 ) 𝑗̂ + (𝑧𝐡 βˆ’ 𝑧𝐴 ) π‘˜Μ‚
𝑧𝐡 βˆ’ 𝑧𝐴
π‘₯𝐴 βˆ’ π‘₯𝐡
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ—
𝑦
𝐡𝐴 = ( 𝐴 βˆ’ 𝑦𝐡 ) = (π‘₯𝐴 βˆ’ π‘₯𝐡 ) 𝑖̂ + (𝑦𝐴 βˆ’ 𝑦𝐡 ) 𝑗̂ + (𝑧𝐴 βˆ’ 𝑧𝐡 ) π‘˜Μ‚
𝑧𝐴 βˆ’ 𝑧𝐡
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ— | = |𝐡𝐴
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ—| = √(π‘₯𝐡 βˆ’ π‘₯𝐴 )2 + (𝑦𝐡 βˆ’ 𝑦𝐴 )2 + (𝑧𝐡 βˆ’ 𝑧𝐴 )2
π‘šπ‘œπ‘‘π‘’π‘™π‘’π‘  ≑ π‘™π‘’π‘›π‘”π‘‘β„Ž: |𝐴𝐡
Parallel and Collinear Vectors
π‘Žβƒ— 𝑖𝑠 𝒑𝒂𝒓𝒂𝒍𝒍𝒆𝒍 π‘‘π‘œ 𝑏⃗⃗ ⇔ π‘Žβƒ— = π‘˜π‘βƒ—βƒ—
π‘˜πœ€π‘…
π‘ƒπ‘œπ‘–π‘›π‘‘π‘  π‘Žπ‘Ÿπ‘’ π‘π‘œπ‘™π‘™π‘–π‘›π‘’π‘Žπ‘Ÿ 𝑖𝑓 π‘‘β„Žπ‘’π‘¦ 𝑙𝑖𝑒 π‘œπ‘› π‘‘β„Žπ‘’ π‘ π‘Žπ‘šπ‘’ 𝑙𝑖𝑛𝑒
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ— π‘“π‘œπ‘Ÿ π‘ π‘œπ‘šπ‘’ π‘ π‘π‘Žπ‘™π‘Žπ‘Ÿ π‘˜
𝐴, 𝐡 π‘Žπ‘›π‘‘ 𝐢 π‘Žπ‘Ÿπ‘’ π‘π‘œπ‘™π‘™π‘–π‘›π‘’π‘Žπ‘Ÿ ⇔ βƒ—βƒ—βƒ—βƒ—βƒ—βƒ—
𝐴𝐡 = π‘˜π΄πΆ
(π‘œπ‘›π‘’ π‘π‘œπ‘šπ‘šπ‘œπ‘› π‘π‘œπ‘–π‘›π‘‘ π‘Žπ‘›π‘‘ π‘‘β„Žπ‘’ π‘ π‘Žπ‘šπ‘’ π‘‘π‘–π‘Ÿπ‘’π‘π‘‘π‘–π‘œπ‘›)
The Division of a Line Segment
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ—: 𝑋𝐡
βƒ—βƒ—βƒ—βƒ—βƒ—βƒ— = π‘Ž : 𝑏
Μ…Μ…Μ…Μ… in the ratio π‘Ž: 𝑏 means 𝐴𝑋
X divides [AB]≑ AB
Internal Division:
P divides [AB] internally in the ratio 1:3. Find P.
AP⃗ : PB⃗ = 1 : 3   →   AP⃗ = (1/4) AB⃗
External Division:
Q divides [AB] externally in the ratio 2:1. Find Q.
Since AQ : QB = 2 : 1 externally, B is the midpoint of [AQ], so  BQ⃗ = AB⃗
Dot / Scalar Product (Scalar, can be ± )
π‘Žβƒ— β€’ 𝑏⃗⃗ = 𝑏⃗⃗ β€’ π‘Žβƒ— = |π‘Žβƒ—||𝑏⃗⃗| cos πœƒ
Properties of dot product
π‘Žβƒ— β€’ 𝑏⃗⃗ = 𝑏⃗⃗ β€’ π‘Žβƒ—
𝑖𝑓 π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘π‘Žπ‘Ÿπ‘Žπ‘™π‘™π‘’π‘™, π‘‘β„Žπ‘’π‘› π‘Žβƒ— β€’ 𝑏⃗⃗ = |π‘Žβƒ—||𝑏⃗⃗|
𝑖𝑓 π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘Žπ‘›π‘‘π‘–π‘π‘Žπ‘Ÿπ‘Žπ‘™π‘™π‘’π‘™, π‘‘β„Žπ‘’π‘› π‘Žβƒ— β€’ 𝑏⃗⃗ = βˆ’ |π‘Žβƒ—||𝑏⃗⃗|
a⃗ • a⃗ = |a⃗|²
(π‘Žβƒ— + 𝑏⃗⃗) β€’ (𝑐⃗ + 𝑑⃗) = π‘Žβƒ— β€’ 𝑐⃗ + π‘Žβƒ— β€’ 𝑑⃗ + 𝑏⃗⃗ β€’ 𝑐⃗ + 𝑏⃗⃗ β€’ 𝑑⃗
π‘Žβƒ— β€’ 𝑏⃗⃗ = 0 (π‘Žβƒ— β‰  0, 𝑏⃗⃗ β‰  0) ↔ π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘π‘’π‘Ÿπ‘π‘’π‘›π‘‘π‘–π‘π‘’π‘™π‘Žπ‘Ÿ
𝑖̂ β€’ 𝑖̂ = 1
𝑗̂ β€’ 𝑗̂ = 1
π‘˜Μ‚ β€’ π‘˜Μ‚ = 1 & 𝑖̂ β€’ 𝑗̂ = 0
𝑖̂ β€’ π‘˜Μ‚ = 0
𝑗̂ β€’ π‘˜Μ‚ = 0
𝑏π‘₯
π‘Žπ‘₯
In Cartesian coordinates: π‘Žβƒ— β€’ 𝑏⃗⃗ = (π‘Žπ‘¦ ) (𝑏𝑦 ) = π‘Žπ‘₯ 𝑏π‘₯ + π‘Žπ‘¦ 𝑏𝑦 + π‘Žπ‘§ 𝑏𝑧
π‘Žπ‘§
𝑏𝑧
Cross / Vector Product (vector)
The magnitude of the vector a⃗ × b⃗ is equal
to the area of the parallelogram determined by the two vectors.
|π‘Žβƒ— × π‘βƒ—βƒ—| = |π‘Žβƒ—||𝑏⃗⃗| 𝑠𝑖𝑛 πœƒ
Direction of the vector π‘Žβƒ— × π‘βƒ—βƒ— is given by right hand rule.
Properties of vector/cross product
π‘Žβƒ— × π‘βƒ—βƒ— = βˆ’ 𝑏⃗⃗ × π‘Žβƒ—
𝑖𝑓 π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘π‘’π‘Ÿπ‘π‘’π‘›π‘‘π‘–π‘π‘’π‘™π‘Žπ‘Ÿ, π‘‘β„Žπ‘’π‘› |π‘Žβƒ— × π‘βƒ—βƒ—| = |π‘Žβƒ—||𝑏⃗⃗|
(π‘Žβƒ— + 𝑏⃗⃗) × (𝑐⃗ + 𝑑⃗) = π‘Žβƒ— × π‘βƒ— + π‘Žβƒ— × π‘‘βƒ— + 𝑏⃗⃗ × π‘βƒ— + 𝑏⃗⃗ × π‘‘βƒ—
π‘Žβƒ— × π‘βƒ—βƒ— = 0 (π‘Žβƒ— β‰  0, 𝑏⃗⃗ β‰  0) ↔ π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘π‘Žπ‘Ÿπ‘Žπ‘™π‘™π‘’π‘™
πΉπ‘œπ‘Ÿ π‘π‘Žπ‘Ÿπ‘Žπ‘™π‘™π‘’π‘™ π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘  π‘‘β„Žπ‘’ π‘£π‘’π‘π‘‘π‘œπ‘Ÿ π‘π‘Ÿπ‘œπ‘‘π‘’π‘π‘‘ 𝑖𝑠 0.
i × i = j × j = k × k = 0
i × j = k,   j × k = i,   k × i = j
a⃗ × b⃗ = (ay bz − az by,  az bx − ax bz,  ax by − ay bx) =
        | î    ĵ    k̂  |
        | ax   ay   az |
        | bx   by   bz |
How do we use dot and cross product
β€’ To find angle between vectors the easiest way is to use dot product, not vector product.
● The angle between two vectors
θ = arccos( (a⃗ • b⃗) / (|a⃗||b⃗|) )
Angle between vectors can be acute or obtuse
● The angle between two lines
θ = arccos( |a⃗ • b⃗| / (|a⃗||b⃗|) )
Angle between lines is by definition acute
π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ π‘Žπ‘Ÿπ‘’ π‘‘π‘–π‘Ÿπ‘’π‘π‘‘π‘–π‘œπ‘› π‘£π‘’π‘π‘‘π‘œπ‘Ÿπ‘ 
β–ͺ Dot product of perpendicular vectors is zero.
β–ͺ To show that two lines are perpendicular use the dot product with line direction vectors.
● The angle between a line and a plane
sin θ = cos φ = |n⃗ • d⃗| / (|n⃗||d⃗|)     ⇒     θ = arcsin( |n⃗ • d⃗| / (|n⃗||d⃗|) )
● The angle between two planes is the same as the angle between
their two normal vectors
πœƒ = π‘Žπ‘Ÿπ‘ π‘π‘œπ‘ 
|𝑛⃗⃗ β€’ π‘š
βƒ—βƒ—βƒ—|
|𝑛⃗⃗||π‘š
βƒ—βƒ—βƒ—|
β–ͺ To show that two planes are perpendicular use the dot product on their normal vectors.
β–ͺ To find all vectors perpendicular to both π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ 𝑓𝑖𝑛𝑑 k (π‘Žβƒ— × π‘βƒ—βƒ—), π‘˜ πœ– 𝑅
β–ͺ To find the unit vector perpendicular to both π‘Žβƒ— π‘Žπ‘›π‘‘ 𝑏⃗⃗ 𝑓𝑖𝑛𝑑
βƒ—βƒ—
π‘Žβƒ—βƒ—× π‘
βƒ—βƒ—|
|π‘Žβƒ—βƒ—× π‘
Coplanar four points
πΉπ‘œπ‘’π‘Ÿ π‘π‘œπ‘–π‘›π‘‘π‘  π‘Žπ‘Ÿπ‘’ π‘π‘œπ‘π‘™π‘Žπ‘›π‘Žπ‘Ÿ 𝑖𝑓 π‘Žπ‘›π‘‘ π‘œπ‘›π‘™π‘¦ 𝑖𝑓 π‘‘β„Žπ‘’ π‘£π‘œπ‘™π‘’π‘šπ‘’ π‘œπ‘“ π‘‘β„Žπ‘’ π‘‘π‘’π‘‘π‘Ÿπ‘Žβ„Žπ‘’π‘‘π‘Ÿπ‘œπ‘› 𝑑𝑒𝑓𝑖𝑛𝑒𝑑 𝑏𝑦 π‘‘β„Žπ‘’π‘š 𝑖𝑠 0:
Volume of a tetrahedron = (1/6) × |scalar triple product|
𝑐π‘₯
1
1 π‘Ž
βƒ—βƒ—
𝑉 = |𝑐⃗ ● ( π‘Žβƒ— × π‘)| = β€– π‘₯
6
6 𝑏
π‘₯
𝑐𝑦
π‘Žπ‘¦
𝑏𝑦
𝑐𝑧
π‘Žπ‘§ β€– 𝑒𝑛𝑖𝑑𝑠 3
𝑏𝑧
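A minimal sketch (not part of the original notes, assuming numpy) of the scalar triple product, the tetrahedron volume and the coplanarity test:

```python
import numpy as np

A, B, C, D = map(np.array, ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]))
a, b, c = B - A, C - A, D - A                  # edge vectors from A

triple = np.dot(c, np.cross(a, b))             # scalar triple product c . (a x b)
print(abs(triple) / 6)                         # volume = 1/6 for this unit tetrahedron

D2 = np.array([2, 3, 0])                       # a point in the plane z = 0 through A, B, C
print(np.dot(D2 - A, np.cross(a, b)))          # 0  ->  A, B, C, D2 are coplanar
```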
Lines
A line is completely determined by two points, which can be translated into a fixed point A and one
direction vector.
● Vector equation of a line
βƒ—βƒ— of any general point P on the line passing through point A
The position vector 𝒓
and having direction vector 𝑏⃗⃗ is given by the equation
π‘Ÿβƒ— = (π‘Ž1 𝑖̂ + π‘Ž2 𝑗̂ + π‘Ž3 π‘˜Μ‚ ) + πœ†(𝑏1 𝑖̂ + 𝑏2 𝑗̂ + 𝑏3 π‘˜Μ‚)
π‘œπ‘Ÿ
π‘Ž1
𝑏1
π‘₯
(𝑦) = (π‘Ž2 ) + πœ† (𝑏2 )
π‘Ž3
𝑧
𝑏3
● Parametric equation of a line – Ξ» is called a parameter Ξ» ∈ 𝑅
π‘Ž1
𝑏1
π‘₯
(𝑦) = (π‘Ž2 ) + πœ† (𝑏2 )
π‘Ž3
𝑧
𝑏3
β‡’
π‘₯ = π‘Ž1 + πœ†π‘1
𝑦 = π‘Ž2 + πœ†π‘2
𝑧 = π‘Ž3 + πœ†π‘3
● Cartesian equation of a line
π‘₯ = π‘Ž1 + πœ†π‘1 ⟹ πœ† = (π‘₯ βˆ’ π‘Ž1 )/𝑏1
𝑦 = π‘Ž2 + πœ†π‘2 ⟹ πœ† = (𝑦 βˆ’ π‘Ž2 )/𝑏2
𝑧 = π‘Ž3 + πœ†π‘3 ⟹ πœ† = (𝑧 βˆ’ π‘Ž3 )/𝑏3
⟹    (x − a₁)/b₁ = (y − a₂)/b₂ = (z − a₃)/b₃     (= λ)
● Distance from a point P to a line L
Point Q is on the line, hence its coordinates must satisfy the line equation.
Solve the equation PQ⃗ • b⃗ = 0 for λ.
From there find the coordinates of the point Q and subsequently the distance |PQ⃗|.
● Relationship between 3 – D lines
● the lines are coplanar (they lie in the same plane). They could be:
β–ͺ intersecting
β–ͺ parallel
β–ͺ coincident
● the lines are not coplanar and are therefore skew (neither parallel nor intersecting)
Are the lines
βˆ™ the same?…….check by inspection
βˆ™ parallel?………check by inspection
βˆ™ skew or do they have one point in common?
Setting the two position vectors equal, r⃗₁ = r⃗₂, gives 3 equations in λ and µ.
Solve two of the equations for λ and µ.
If the values of λ and µ do not satisfy the third equation then
the lines are skew, and they do not intersect.
If these values do satisfy the three equations then substitute the value
of λ or µ into the appropriate line and find the point of intersection.
● Distance between two skew lines π‘Ÿβƒ— = π‘Žβƒ— + πœ† 𝑏⃗⃗ π‘Žπ‘›π‘‘ π‘Ÿβƒ— = 𝑐⃗ + πœ‡ 𝑑⃗
𝑑 = |𝑛̂ β€’ (𝑐⃗ βˆ’ π‘Žβƒ—)|
π‘€β„Žπ‘’π‘Ÿπ‘’
𝑛̂ =
βƒ—βƒ—×𝑑⃗
𝑏
βƒ—βƒ—×𝑑⃗|
|𝑏
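A minimal sketch (not part of the original notes, assuming numpy) for two lines r⃗ = a⃗ + λb⃗ and r⃗ = c⃗ + µd⃗: if b⃗ × d⃗ ≠ 0 the lines are not parallel, and the formula above gives their distance (0 means they intersect):

```python
import numpy as np

a, b = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])   # line 1: the x-axis
c, d = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])   # line 2: parallel to the y-axis, lifted to z = 1

n = np.cross(b, d)
if np.allclose(n, 0):
    print("direction vectors are parallel: the lines are parallel or coincident")
else:
    n_hat = n / np.linalg.norm(n)
    print(abs(np.dot(n_hat, c - a)))           # 1.0 here; a value of 0 would mean the lines intersect
```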
Planes
A plane is completely determined by two intersecting lines, which can be translated into a fixed point A
and two non-parallel direction vectors.
● Vector equation of a plane
βƒ—βƒ— of any general point P on the plane passing through point A
The position vector 𝒓
and having two direction vector 𝑏⃗⃗ is given by the equation
π‘Ÿβƒ— = π‘Žβƒ— + πœ†π‘βƒ—βƒ— + πœ‡π‘βƒ—
π‘œπ‘Ÿ
π‘Ž1
𝑏1
π‘₯
(𝑦) = (π‘Ž2 ) + πœ† (𝑏2 )
π‘Ž3
𝑧
𝑏3
● Parametric equation of a plane – λ and μ are called parameters, λ, μ ∈ R
π‘Ž1
𝑐1
𝑏1
π‘₯
(𝑦) = (π‘Ž2 ) + πœ† (𝑏2 ) + πœ‡ (𝑐2 ) β‡’
π‘Ž3
𝑐3
𝑧
𝑏3
π‘₯ = π‘Ž1 + πœ†π‘1 + πœ‡π‘1
𝑦 = π‘Ž2 + πœ†π‘2 + πœ‡π‘2
𝑧 = π‘Ž3 + πœ†π‘3 + πœ‡π‘3
● Normal/Scalar product form of vector equation of a plane
π‘Ÿβƒ— β€’ 𝑛⃗⃗ = π‘Žβƒ— β€’ 𝑛⃗⃗ π‘œπ‘Ÿ 𝑛⃗⃗ β€’ (π‘Ÿβƒ— βˆ’ π‘Žβƒ—) = 0
● Cartesian equation of a plane
𝑛1 π‘₯ + 𝑛2 𝑦 + 𝑛3 𝑧 = 𝑑
𝑑 = 𝑛1 π‘Ž1 + 𝑛2 π‘Ž2 + 𝑛3 π‘Ž3
To convert a vector equation into a Cartesian equation, you find the cross product of the two vectors
appearing in the vector equation to find a normal to the plane and use that to find the Cartesian
equation.
To convert Cartesian -> vector form, you need either two vectors or three points that lie on the plane!
So the first step is to choose three arbitrary non-collinear points on the plane.
● Distance from origin
D = |r⃗ • n̂| = |a⃗ • n̂| = |n₁a₁ + n₂a₂ + n₃a₃| / √(n₁² + n₂² + n₃²)
● Intersection of a line and a plane
Line L:   r⃗ = (1, −2, −1) + μ(4, 5, 6)     and plane Π:   x + 2y + 3z = 5
To check if the line is parallel to the plane do dot product of direction vector of the line and
normal vector to the plane. If dot product is not zero the line and the plane are not parallel
and the line will intersect the plane in one point.
Substitute the line equation into the plane equation to obtain the value of the line parameter µ.
(1 + 4μ) + 2(−2 + 5μ) + 3(−1 + 6μ) = 5    ⟹    1 + 4µ − 4 + 10µ − 3 + 18µ = 5    ⇒    μ = 11/32
Substitute µ into the equation of the line to obtain the co-ordinates of the point of intersection:
(1/32)(76, −9, 34)
In general: Solve for µ and substitute into the equation of the line to get the point of intersection. If
this equation gives you something like 0 = 5, then the line will be parallel and not in the plane, and if
the equation gives you something like 5 = 5 then the line is contained in the plane.
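The same worked example can be reproduced numerically; a minimal sketch (not part of the original notes, assuming numpy):

```python
import numpy as np

p0, v = np.array([1.0, -2.0, -1.0]), np.array([4.0, 5.0, 6.0])   # line r = p0 + mu*v
n, rhs = np.array([1.0, 2.0, 3.0]), 5.0                          # plane x + 2y + 3z = 5

if np.dot(n, v) == 0:
    print("line is parallel to the plane (either contained in it or no intersection)")
else:
    mu = (rhs - np.dot(n, p0)) / np.dot(n, v)
    print(mu)                  # 0.34375 = 11/32
    print(p0 + mu * v)         # [ 2.375   -0.28125  1.0625 ] = (76, -9, 34)/32
```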
System of Linear Equations
Consider the following linear system:
x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8
The augmented matrix:
[1  3  −2 | 5]
[3  5   6 | 7]
[2  4   3 | 8]
A LINEAR SYSTEM MAY BEHAVE IN ANY OF THREE POSSIBLE WAYS
1. The system has a single unique solution.
2. The system has no solution.
3. The system has infinitely many solutions.
For three variables, each linear equation is a plane in 3-D space, and the solution set is the intersection
of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set.
Example: solution is a single point. Combine two equations at a time to eliminate one variable.
1st step:
(1st equation) · (−3) + (2nd equation) = new second equation (no x any more)
(1st equation) · (−2) + (3rd equation) = new third equation (no x any more)
2nd step:
(2nd equation) · (−1/2) + (3rd equation) = new third equation (no y any more)
Working:   −3·(eq. 1): −3x − 9y + 6z = −15,    −2·(eq. 1): −2x − 6y + 4z = −10,    −½·(new eq. 2): 2y − 6z = 4

x + 3y − 2z = 5              x + 3y − 2z = 5              x + 3y − 2z = 5
3x + 5y + 6z = 7      ~      −4y + 12z = −8        ~      −4y + 12z = −8
2x + 4y + 3z = 8             −2y + 7z = −2                          z = 2
This is equivalent to saying that the augmented matrix is in row echelon form – an augmented matrix
with zeroes in the bottom left corner.

[1  3  −2 |  5]        [1  3  −2 |  5]        [1  3  −2 |  5]       x + 3y − 2z = 5
[3  5   6 |  7]   ~    [0 −4  12 | −8]   ~    [0 −4  12 | −8]   ~   −4y + 12z = −8
[2  4   3 |  8]        [0 −2   7 | −2]        [0  0   1 |  2]       z = 2

solution:   z = 2,   y = 8,   x = −15
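The same system can be solved directly; a minimal sketch (not part of the original notes, assuming numpy):

```python
import numpy as np

A = np.array([[1, 3, -2],
              [3, 5,  6],
              [2, 4,  3]], dtype=float)
b = np.array([5, 7, 8], dtype=float)

print(np.linalg.solve(A, b))   # [-15.   8.   2.]  i.e. x = -15, y = 8, z = 2
# np.linalg.solve only covers the unique-solution case; a singular coefficient matrix
# (no solution or infinitely many solutions) raises numpy.linalg.LinAlgError instead.
```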
Generally, using this method we get the augmented matrix of coefficients of the system in echelon form.
Echelon form
[a  b  c | d]        ax + by + cz = d
[0  e  f | g]   ~    ey + fz = g
[0  0  h | k]        hz = k

unique solution:   h ≠ 0  →  z = k/h  →  then y & x     (k may or may not be 0)
no solution:   h = 0 and k ≠ 0  →  0·z = k ≠ 0, which is absurd  →
there is no solution (the system is inconsistent)
infinitely many solutions:   h = 0 and k = 0  →  z can be any number, so we write:
z = t where t ∈ R,    x = x(t),    y = y(t),    z = t
Parametric representation of infinitely many solutions is not unique. We expressed the variable that was
left in row 3 (z) in terms of the parameter, having eliminated the variables x and y from row 3. It does
not have to be that way: we could instead eliminate z and y to leave x in row 3,
and then express y and z in terms of it.
One can solve for any of the variables. Of course, the solution set will look different.
However, it will still represent the same solutions.
Example:
unique solution:
[1  1  1 | 1]
[0  2  2 | 2]   →   z = 2,   y = −1,   x = 0,    P(0, −1, 2)
[0  0  3 | 6]
no solution:
[1  1  1 | 1]
[0  2  2 | 2]   →   z · 0 = 3    (absurd)
[0  0  0 | 3]
infinitely many solutions:
[1  1  1 | 1]
[0  2  2 | 2]   →   z · 0 = 0  (true for any z)   ⇒   z = t ∈ R,   y = 1 − t,   x = 0
[0  0  0 | 0]
Counting principles, including permutations and combinations.
The binomial theorem: expansion of (a + b)ⁿ, n ∈ ℕ.
● THE PRODUCT RULE
If there are π‘š different ways of performing an operation and for each of these there are 𝑛
different ways of performing a second independent operation, then there are π‘šπ‘› different
ways of performing the two operations in succession.
The product principle can be extended to three or more successive operations.
The number of different ways of performing an operation is equal to the sum of the
different mutually exclusive possibilities.
● COUNTING PATHS
The word π‘Žπ‘›π‘‘ suggests multiplying the possibilities
The word π‘œπ‘Ÿ suggests adding the possibilities.
If the order doesn't matter, it is a Combination.
If the order does matter it is a Permutation.
● PERMUTATIONS (order matters)
A permutation of a group of symbols is any arrangement of those symbols in a definite order.
● Permutations of 𝒏 different object : 𝒏!
Explanation: Assume you have n different symbols and therefore n places to fill in your
arrangement. For the first place, there are n different possibilities. For the second place, no
matter what was put in the first place, there are n – 1 possible symbols to place, for the rth
place there are n – r +1 possible places until the point where r = n, at which point we have
saturated all the places. According to the product principle, therefore, we have n (n – 1)(n –
2)(n – 3)β‹―1 different arrangements, or n!
Wise Advice: If a group of items have to be kept together, treat the items as one object.
Remember that there may be permutations of the items within this group too.
● Permutations of π’Œ different objects out of 𝒏 different available :
𝒏 βˆ™ (𝒏 βˆ’ 𝟏) βˆ™βˆ™βˆ™ (𝒏 βˆ’ π’Œ + 𝟏)
Suppose we have 10 letters and want to make groups of 4 letters. For four-letter
permutations, there are 10 possibilities for the first letter, 9 for the second, 8 for the third,
and 7 for the last letter. We can find the total number of different four-letter permutations by
multiplying 10 × 9 × 8 × 7 = 5040. The same logic applies straightforwardly to similar questions.
● Permutations with repetition of k different objects out of n different available = nᵏ
(There are n possibilities for the first choice, THEN there are n possibilities for the second
choice, and so on, multiplying each time.)
● COMBINATIONS (order doesn’t matter)
It is the number of ways of choosing π’Œ objects out of 𝒏 available given that
β–ͺ The order of the elements does not matter.
β–ͺ The elements are not repeated [such as lottery numbers (2,14,15,27,30,33)]
The easiest way to explain it is to:
• assume that the order does matter (i.e. permutations),
• then alter it so the order does not matter.
Since the combination does not take into account the order, we have to divide the
permutation of the total number of symbols available by the number of redundant
possibilities. π’Œ selected objects have a number of redundancies equal to the permutation of
the objects π’Œ! (since order doesn’t matter) However, we also need to divide the permutation
n! by the permutation of the objects that are not selected, that is to say (𝒏 βˆ’ π’Œ)! .
(n k) ≡ Cnk ≡ nCk = n! / (k!(n − k)!) = n(n − 1)(n − 2)⋯(n − k + 1) / k!
● Binomial Expansion/Theorem
(a + b)ⁿ = Σ_{k=0}^{n} nCk · a^(n−k) b^k = aⁿ + nC1 · a^(n−1) b + ⋯ + nCk · a^(n−k) b^k + ⋯ + bⁿ,     n ∈ ℕ
nC0 ≡ 1,     0! ≡ 1
● Binomial Coefficient
nCk is the coefficient of the term containing a^(n−k) b^k in the expansion of (a + b)ⁿ
nCk = n! / (k!(n − k)!) = n(n − 1)(n − 2)⋯(n − k + 1) / k! = nC(n−k)
The general term, or (k + 1)th term, is:    T(k+1) = nCk · a^(n−k) b^k
The constant term is the term containing no variables.
When finding the coefficient of π‘₯ 𝑛 always consider the set of all terms containing π‘₯ 𝑛
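A minimal sketch (not part of the original notes) of using the general term to pick out a coefficient, with a brute-force check of the expansion:

```python
import math

# coefficient of x^3 in (2 + x)^5: take the term with b^k = x^3, i.e. k = 3
n, k = 5, 3
print(math.comb(n, k) * 2 ** (n - k))          # 40

# sanity check: sum the whole expansion at some x and compare with (2 + x)^5
x = 1.5
expansion = sum(math.comb(n, j) * 2 ** (n - j) * x ** j for j in range(n + 1))
print(expansion, (2 + x) ** 5)                 # both 525.21875
```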
Probability
• The number of trials is the total number of times the “experiment” is repeated.
• The outcomes are the different results possible for one trial of the experiment.
• Equally likely outcomes are expected to have equal frequencies.
• The sample space, U, is the set of all possible outcomes of an experiment.
• An event is the occurrence of one particular outcome.
P(A) is the probability of an event A occurring in one trial,
n(A) is the number of times event A occurs in the sample space,
n(U) is the total number of possible outcomes.
P(A) = n(A) / n(U)
• Complementary Events
Two events are described as complementary if they are the only two possible outcomes.
Two complementary events are mutually exclusive.
Since an event must either occur or not occur, the probability of the event either occurring or not
occurring must be 1.
𝑷(𝑨) + 𝑷(𝑨′ ) = 𝟏
Use when you need probability that an event will not happen
• Probability when we are interested in more than one outcome
(events are “and”, “or”, “at least”)
• Combined Events
βˆͺ (π‘’π‘›π‘–π‘œπ‘›) ≑ π‘’π‘–π‘‘β„Žπ‘’π‘Ÿ
∩ (π‘–π‘›π‘‘π‘’π‘Ÿπ‘ π‘’π‘π‘‘π‘–π‘œπ‘›) ≑ π‘π‘œπ‘‘β„Ž/π‘Žπ‘›π‘‘
Given two events, B and A, the probability of at least one of the two events occurring,
𝑃(𝐴 βˆͺ 𝐡) = 𝑃(𝐴) + 𝑃(𝐡) βˆ’ 𝑃(𝐴 ∩ 𝐡)
either A
or B
or both
P(A) includes part of B from intersection
P(B) includes part of A from intersection
𝑃(𝐴 ∩ 𝐡) (both A and B)
was counted twice, so one has
to be subtracted
𝐼𝑑 𝑖𝑠 π‘–π‘šπ‘π‘œπ‘Ÿπ‘‘π‘Žπ‘›π‘‘ π‘‘π‘œ π‘˜π‘›π‘œπ‘€ β„Žπ‘œπ‘€ π‘‘π‘œ 𝑔𝑒𝑑 𝑃(𝐴 ∩ 𝐡)
For mutually exclusive events (no possibility of A and B occurring at the same time), e.g.:
Turning left and turning right (you can't do both at the same time)
Tossing a coin: Heads and Tails
P(A ∪ B) = P(A) + P(B),     P(A ∩ B) = 0     (since A ∩ B = ∅)
For non-mutually exclusive events, P(A ∩ B) is found using conditional probability (see below).
Independent and Dependent Events
A bag contains three different kinds of marbles: red, blue and green. You pick a marble twice. The probability of
picking the red one (or any particular one) the second time depends on whether you put the first marble back or not.
• Independent Events: the probability that one event occurs in no way affects the probability of
the other event occurring.    (You put the first marble back.)
• Dependent Events: the probability of one event occurring influences the likelihood of the other event.
(You don’t put the first marble back.)
∎ Conditional Probability:
Given two events, B and A, the conditional probability of an event A is the probability that
the event will occur given the knowledge that an event B has already occurred.
This probability is written as P(A|B) (the notation for the probability of A given B).
Probability of the intersection of A and B (both events occur) is: 𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐡)𝑃(𝐴|𝐡)
β€’
Independent Events:
β€’
𝑃(𝐴|𝐡) = 𝑃(𝐴) = 𝑃(𝐴|𝐡′ )
𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐡)𝑃(𝐴|𝐡)
𝐴 π‘‘π‘œπ‘’π‘  π‘›π‘œπ‘‘ 𝑑𝑒𝑝𝑒𝑛𝑑 π‘œπ‘› 𝐡 π‘›π‘œπ‘Ÿ π‘œπ‘› 𝐡′
𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐴)𝑃(𝐡)
Dependent Events:
𝑃(𝐴|𝐡) π‘π‘Žπ‘™π‘π‘’π‘™π‘Žπ‘‘π‘’π‘‘ 𝑑𝑒𝑝𝑒𝑛𝑑𝑖𝑛𝑔 π‘œπ‘› π‘‘β„Žπ‘’ 𝑒𝑣𝑒𝑛𝑑 𝐡
𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐡)𝑃(𝐴|𝐡)
𝑃(𝐴|𝐡) =
𝑃(𝐴 ∩ 𝐡)
𝑃(𝐡)
● Use of Venn diagrams, tree diagrams and tables of outcomes to solve problems.
1. Venn Diagrams
The probability is found using the principle  P(A) = n(A) / n(U)
2. Tree diagrams
A more flexible method for finding probabilities is known as a tree diagram.
This allows one to calculate the probabilities of the occurrence of events, even where trials are
non-identical (where the probability on a later trial depends on the earlier one, i.e. P(A|B) ≠ P(A)), through the product principle.
β§ͺ Bayes’ Theorem
𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐡)𝑃(𝐴|𝐡)
𝑃(𝐴|𝐡) =
⟹
𝑃(𝐴 ∩ 𝐡) = 𝑃(𝐴)𝑃(𝐡|𝐴)
𝑃(𝐴 ∩ 𝐡) 𝑃(𝐴)𝑃(𝐡|𝐴)
=
𝑃(𝐡)
𝑃(𝐡)
π΅π‘Žπ‘¦π‘’π‘ β€² π‘‘β„Žπ‘’π‘œπ‘Ÿπ‘’π‘š
β–ͺ Another form of Bayes’ theorem (Formula booklet)
From tree diagram:
there are two ways to get A: either after B has happened or after B has not happened:
𝑃(𝐴) = 𝑃(𝐡)𝑃(𝐴|𝐡) + 𝑃(𝐡′)𝑃(𝐴|𝐡′)
⟹
𝑃(𝐡|𝐴) =
𝑃(𝐡)𝑃(𝐴|𝐡)
𝑃(𝐡)𝑃(𝐴|𝐡) + 𝑃(𝐡′)𝑃(𝐴|𝐡′)
β–ͺ Extension of Bayes’ Theorem
If there are more options than simply B occurs or B doesn’t occur, for example if there were
three possible outcomes for the first event B1, B2, and B3
Probability of A occurring is: 𝑃(𝐡1 )𝑃(𝐴|𝐡1 ) + 𝑃(𝐡2 )𝑃(𝐴|𝐡2 ) + 𝑃(𝐡3 )𝑃(𝐴|𝐡3 )
𝑃(𝐡𝑖 |𝐴) =
𝑃(𝐡𝑖 )𝑃(𝐴|𝐡𝑖 )
𝑃(𝐡1 )𝑃(𝐴|𝐡1 ) + 𝑃(𝐡2 )𝑃(𝐴|𝐡2 ) + 𝑃(𝐡3 )𝑃(𝐴|𝐡3 )
Outcomes B1, B2, and B3 must cover all the possible outcomes.
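A minimal sketch of Bayes' theorem (not part of the original notes; the numbers below are made up purely for illustration):

```python
# suppose 1% of items are defective (event B), a test flags 95% of defective items (A given B)
# and wrongly flags 2% of good items (A given B')
p_B = 0.01
p_A_given_B = 0.95
p_A_given_notB = 0.02

p_A = p_B * p_A_given_B + (1 - p_B) * p_A_given_notB    # total probability of a flag
p_B_given_A = p_B * p_A_given_B / p_A                   # Bayes' theorem
print(p_A, p_B_given_A)                                 # 0.0293 and about 0.324
```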
Descriptive Statistics
Concepts of population, sample, random sample and frequency distribution of discrete and continuous data.
• A population is the set of all individuals with a given value for a variable associated with them.
• A sample is a small group of individuals randomly selected (in the case of a random sample) from the
population as a whole, used as a representation of the population as a whole.
• The frequency distribution of data is the number of individuals within a sample or population for each
value of the associated variable in discrete data, or for each range of values for the associated variable in
continuous data.
Presentation of data: frequency tables and diagrams
Grouped data: mid-interval values, interval width, upper and lower interval boundaries,
frequency histograms.
• Mid-interval values are found by averaging the upper and lower interval boundaries (half of their sum).
• The interval width is simply the distance between the upper and lower interval boundaries.
• Frequency histograms are drawn with bar width proportional to the interval width and frequency as the
height.
Median, mode; quartiles, percentiles.
Range; interquartile range; variance, standard deviation.
• Mode (discrete data) is the most frequently occurring value in the data set.
• Modal class (continuous data) is the most frequently occurring class.
• Median is the middle value of an ordered data set.
  For an odd number of data values, the median is the middle value.
  For an even number of data values, the median is the average of the two middle values.
• Percentile is the score below which a certain percentage of the data lies.
• Lower quartile (Q1) is the 25th percentile.
• Median (Q2) is the 50th percentile.
• Upper quartile (Q3) is the 75th percentile.
• Range is the difference between the highest and lowest value in the data set.
• The interquartile range is Q3 − Q1.
• Cumulative frequency is the frequency of all values less than a given value.
• The population mean, μ, is generally unknown; the sample mean, x̄, used to serve as an unbiased
estimate of it. From now on, for examination purposes, data will be treated as the population;
estimation of the mean and variance of a population from a sample is no longer required.
Discrete and Continuous Random Variables
• A variable X whose value depends on the outcome of a random process is called a random variable.
For any random variable there is a probability distribution/function associated with it.
● Probability distribution/ function
Discrete Random Variables
P(X = x), the probability distribution of x, involves listing P(π‘₯𝑖 ) for each π‘₯𝑖 .
1. 0 ≀ 𝑃(𝑋 = π‘₯) ≀ 1
2. βˆ‘ 𝑃(𝑋 = π‘₯) = 1
π‘₯
3. 𝑃(𝑋 = π‘₯𝑛 )
= 1 βˆ’ βˆ‘ 𝑃(𝑋 = π‘₯π‘˜ )
π‘˜β‰ π‘›
[𝑃 (𝑒𝑣𝑒𝑛𝑑 π‘₯𝑛 π‘œπ‘π‘π‘’π‘Ÿπ‘ ) = 1 βˆ’ 𝑃(π‘Žπ‘›π‘¦ π‘œπ‘‘β„Žπ‘’π‘Ÿ 𝑒𝑣𝑒𝑛𝑑 π‘œπ‘π‘π‘’π‘Ÿπ‘ )]
Continuous Random Variables X defined on a ≤ x ≤ b
probability density function (p.d.f.), f (x), describes the relative likelihood for this variable to
take on a given value
cumulative distribution function ( c.d.f.), 𝐹(𝑑), is found by integrating the p.d.f. between the
minimum value of X and t
𝑑
𝐹(𝑑) = 𝑃(𝑋 ≀ 𝑑) = ∫ 𝑓(π‘₯)𝑑π‘₯
π‘Ž
1.
𝑓(π‘₯) β‰₯ 0
π‘“π‘œπ‘Ÿ π‘Žπ‘™π‘™ π‘₯ πœ– (π‘Ž, 𝑏)
𝑏
2. ∫ 𝑓(π‘₯) = 1
π‘Ž
𝑑
3. π‘“π‘œπ‘Ÿ π‘Žπ‘›π‘¦ π‘Ž ≀ 𝑐 < 𝑑 ≀ 𝑏,
𝑃(𝑐 < 𝑋 < 𝑑) = ∫ 𝑓(π‘₯)𝑑π‘₯
𝑐
β–ͺ πΉπ‘œπ‘Ÿ π‘Ž π‘π‘œπ‘›π‘‘π‘–π‘›π‘’π‘œπ‘’π‘  π‘Ÿπ‘Žπ‘›π‘‘π‘œπ‘š π‘£π‘Žπ‘Ÿπ‘–π‘Žπ‘π‘™π‘’, π‘‘β„Žπ‘’ π‘π‘Ÿπ‘œπ‘π‘Žπ‘π‘–π‘™π‘–π‘‘π‘¦ π‘œπ‘“ π‘Žπ‘›π‘¦ 𝑠𝑖𝑛𝑔𝑙𝑒 π‘£π‘Žπ‘™π‘’π‘’ 𝑖𝑠 π‘§π‘’π‘Ÿπ‘œ
𝑃(𝑋 = 𝑐) = 0
ο‚·
β‡’ 𝑃(𝑐 ≀ 𝑋 ≀ 𝑑) = 𝑃(𝑐 < 𝑋 < 𝑑) = 𝑃(𝑐 ≀ 𝑋 < 𝑑) 𝑒𝑑𝑐.
Expected value (or mean) is a weighted average of the possible values that X can take, each value
being weighted according to the probability of that event occurring.
Discrete Random Variables:       E(X) = μ = Σ x · P(X = x)
Continuous Random Variables:     E(X) = μ = ∫_{−∞}^{∞} x · f(x) dx
sum of: [(each of the possible outcomes) × (the probability of the outcome occurring)]
Properties of expected value 𝑬(𝑿)
1. 𝐼𝑓 π‘Ž π‘Žπ‘›π‘‘ 𝑏 π‘Žπ‘Ÿπ‘’ π‘π‘œπ‘›π‘ π‘‘π‘Žπ‘›π‘‘π‘ , π‘‘β„Žπ‘’π‘› 𝐸(π‘Žπ‘‹ + 𝑏) = π‘ŽπΈ(𝑋) + 𝑏
2. 𝐸(𝑋 + π‘Œ ) = 𝐸(𝑋) + 𝐸(π‘Œ )
Expected Value of a Function of X
𝐸[ 𝑔(𝑋)], π‘€β„Žπ‘’π‘Ÿπ‘’ 𝑔(𝑋) 𝑖𝑠 π‘Ž π‘“π‘’π‘›π‘π‘‘π‘–π‘œπ‘› π‘œπ‘“ 𝑋 𝑖𝑠:
Discrete Random Variables:       E[g(X)] = Σ g(x) P(X = x)
Continuous Random Variables:     E[g(X)] = ∫ g(x) f(x) dx
ο‚· Mode is the most likely value of X.
Discrete Random Variables
The mode is the value of x with largest 𝑃(𝑋 = π‘₯) which can be different from the expected value
Continuous Random Variables
The mode is the value of x where f(x) is maximum (which may not be unique).
• Median
Discrete Random Variables
The median of a discrete random variable is the "middle" value. It is the value of X such that
P(X ≤ x) ≥ 1/2    and    P(X ≥ x) ≥ 1/2
Continuous Random Variables
The median of a random variable X is a number m such that    ∫_a^m f(x) dx = 1/2
The median m is the number for which the probability is exactly ½ that the random variable will
have a value greater than m, and ½ that it will have a value less than m.
ο‚· Variance
The variance of a random variable tells us something about the spread of the possible values of the
variable. Variance, Var( X ), is defined as the average of the squared differences of X from the mean:
Var(X) = σ² = E[(X – μ)²] = E(X²) − μ²
Discrete Random Variables:       Var(X) = Σ x² P(X = x) − μ²
Continuous Random Variables:     Var(X) = ∫_a^b x² f(x) dx − μ²
Properties of variance
Note that the variance does not behave in the same way as expectation when we multiply and add
constants to random variables. In fact:
𝑉 π‘Žπ‘Ÿ(π‘Žπ‘‹ + 𝑏) = π‘Ž2 π‘‰π‘Žπ‘Ÿ(𝑋)
𝐼𝑛 π‘”π‘’π‘›π‘’π‘Ÿπ‘Žπ‘™ π‘‰π‘Žπ‘Ÿ[𝑋 + π‘Œ] β‰  π‘‰π‘Žπ‘Ÿ(𝑋) + π‘‰π‘Žπ‘Ÿ(π‘Œ)
(only for independent βˆ’ not IB)
Standard deviation of X
𝜎 = βˆšπ‘‰π‘Žπ‘Ÿ(𝑋)
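A minimal sketch (not part of the original notes): E(X), Var(X) and σ for a small discrete distribution, here a fair six-sided die:

```python
xs = [1, 2, 3, 4, 5, 6]
ps = [1 / 6] * 6

mu = sum(x * p for x, p in zip(xs, ps))                  # E(X)
var = sum(x ** 2 * p for x, p in zip(xs, ps)) - mu ** 2  # E(X^2) - mu^2
print(mu, var, var ** 0.5)                               # 3.5, 2.9166..., 1.7078...
```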
Binomial Distribution - Discrete
The following criteria must be met in order for a random probability distribution to be a binomial
distribution.
▪ The experiment consists of n repeated trials.
▪ Each trial can result in just two possible outcomes. We call one of these outcomes a success and
the other, a failure.
▪ The probability of success, denoted by p, is the same on every trial.
▪ The trials are independent; that is, the outcome on one trial does not affect the outcome on other
trials – the probability of success is a constant in each trial.
If a random variable X has a binomial distribution, we write
β€’ 𝑿 ~ 𝑩(𝒏, 𝒑)
(~ means β€˜has distribution…’).
and the probability density function is:
• P(X = x) = nCx · p^x (1 − p)^(n−x),     x = 0, 1, …, n
β€’ n is number of trials
β€’ p is the probability of a success
β€’ (1 – p) is the probability of a failure.
If X is a binomial random variable with parameters n and p, then the mean and variance are:
• E(X) = μ = np
• Var(X) = σ² = np(1 − p)
Poisson Distribution
A discrete random variable X with a probability distribution function (p.d.f.) of the form:
• P(X = x) = m^x e^(−m) / x!,     x = 0, 1, 2, …
is said to be a Poisson random variable with parameter m. We write  • X ~ Po(m)
Mean and Variance
𝐼𝑓 𝑋 ~ π‘ƒπ‘œ(π‘š), π‘‘β„Žπ‘’π‘›
β€’ 𝐸(𝑋) = π‘š
β€’ π‘‰π‘Žπ‘Ÿ(𝑋) = π‘š
Random Events
The Poisson distribution is useful because many random events follow it.
If a random event has a mean number of occurrences m in a given time period, then the number of
occurrences within that time period will follow a Poisson distribution.
For example, the occurrence of earthquakes could be considered to be a random event. If there are
5 major earthquakes each year, then the number of earthquakes in any given year will have a
Poisson distribution with parameter 5.
There are three criteria that must be met in order for a random probability distribution to be a
Poisson distribution.
▪ The average number of occurrences (m) is constant for every interval.
▪ The probability of more than one occurrence in a given interval is very small.
▪ The numbers of occurrences in disjoint intervals are independent of each other.
Sums of Poissons
Suppose X and Y are independent Poisson random variables with parameters β„“ and m respectively. Then
X + Y has a Poisson distribution with parameter β„“ + m.
In other words: if X ~ Po(ℓ) and Y ~ Po(m), then X + Y ~ Po(ℓ + m)
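A minimal sketch (not part of the original notes, assuming scipy) of the Poisson distribution, using the earthquake example above (m = 5 per year):

```python
from scipy.stats import poisson

m = 5
print(poisson.pmf(3, m))                     # P(X = 3)  ~ 0.1404
print(poisson.cdf(3, m))                     # P(X <= 3) ~ 0.2650
print(poisson.mean(m), poisson.var(m))       # both 5.0: E(X) = Var(X) = m

# sum of independent Poissons: X ~ Po(2) and Y ~ Po(3) gives X + Y ~ Po(5)
print(poisson.pmf(3, 2 + 3))                 # the same 0.1404 as above
```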
Normal distribution. Standardization of normal variables.
A continuous random variable X follows a normal distribution if it has the following probability
density function :
β€’ 𝑓(π‘₯) =
1
𝜎√2πœ‹
𝑒
1 π‘₯βˆ’πœ‡ 2
βˆ’ (
)
2 𝜎
βˆ’βˆž<π‘₯ <∞
The parameters of the distribution are μ and 𝝈𝟐 , where μ is the mean (expectation) of the
distribution and 𝜎 2 is the variance.
𝑿 ~ 𝑡(𝝁, 𝝈𝟐 ) the random variable X has a normal distribution with parameters ΞΌ and 𝜎2 .
Properties
1.  f′(x) = − (1/(σ²√(2π))) · ((x − μ)/σ) · e^(−½((x−μ)/σ)²) = 0     ⇒     maximum:  x = μ
2.  f″(x) = − (1/(σ³√(2π))) · [1 − ((x − μ)/σ)²] · e^(−½((x−μ)/σ)²) = 0     ⇒     inflection points:  x = μ ± σ
The curve is symmetrical about the line x = ΞΌ
β–ͺ
lim 𝑓(π‘₯) = 0
π‘₯β†’±βˆž
∞
β–ͺ
∫ 𝑓(π‘₯) = 1
βˆ’βˆž
β–ͺ πœ‡ = π‘šπ‘Žπ‘₯{𝑓(π‘₯)}
For a normal curve, standard deviation Οƒ is uniquely determined as the horizontal distance from the
vertical line of symmetry π‘₯ = πœ‡ to the point of inflection.
In a normal distribution, 68.26% of values lie within one standard deviation of the mean, 95.4% of values
lie within two standard deviations of the mean and 99.74% of values lie within three standard deviations
of the mean.
Standard score or z – score is the number of standard deviations from the mean.
The Standard Normal Distribution
If 𝑍 ~ 𝑁(0, 1), then Z is said to follow a standard normal distribution.
P(Z < z) is known as the cumulative distribution function of the random variable Z. For the standard
normal distribution, this is usually denoted by F(z). Normally, you would work out the c.d.f. by doing
some integration. However, it is impossible to do this for the normal distribution and so results have to be
looked up in statistical tables.
Standardising: Now, the mean and variance of the normal distribution can be any value and so clearly
there can't be a statistical table for each one. Instead, we convert to the standard normal distribution- we
can also use statistical tables for the standard normal distribution to find the c.d.f. of any normal
distribution.
The cumulative distribution function is defined as P(X ≤ x), and that is what we need.
The standard normal distribution, or Z-distribution, is the application of the transformation
Z = (X − μ)/σ
to a normal X-distribution, such that the mean is at x = 0 and there is one standard deviation per unit on
the x-axis. Where the probability density function for a normal distribution has two parameters, μ and σ, the
Z-distribution has none. This makes it useful when comparing results from two or more different normal
distributions, since comparing Z-values allows one to take into account the standard deviation and mean
when comparing results.
Finding probabilities with a GDC involves using normalcdf(a,b,ΞΌ ,Οƒ ) for lower limit a and upper limit b
(under “DISTR”). It is important to note that P(Z ≤ a) = P(Z < a).
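A minimal sketch (not part of the original notes, assuming scipy) of normal probabilities, standardising and the inverse c.d.f. (the Python analogues of normalcdf and invNorm below):

```python
from scipy.stats import norm

mu, sigma = 100, 15                         # X ~ N(100, 15^2)
print(norm.cdf(115, mu, sigma))             # P(X <= 115) ~ 0.8413
print(norm.cdf(115, mu, sigma) - norm.cdf(85, mu, sigma))   # P(85 <= X <= 115) ~ 0.6827

z = (115 - mu) / sigma                      # standardising: z = (x - mu)/sigma = 1
print(norm.cdf(z))                          # the same 0.8413 from Z ~ N(0, 1)

print(norm.ppf(0.95, mu, sigma))            # invNorm: the x with P(X <= x) = 0.95, ~ 124.67
```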
CALCULATOR
Binomial Distribution
• X ~ B(n, p)
• P(X = x) = nCx · p^x (1 − p)^(n−x),   x = 0, 1, …, n
• n is the number of trials
• p is the probability of a success
• (1 – p) is the probability of a failure
• E(X) = μ = np
• Var(X) = σ² = np(1 − p)
• BinomPDF(trials, probability of event, value)
  o Gives the probability of a particular number of successes in n trials:
    P(X = x) = binompdf(n, p, x)
• BinomCDF(trials, probability of event, value)
  o Gives cumulative probability, i.e. the probability that the number of successes within n trials is at most the value:
    P(X ≤ x) = binomcdf(n, p, x)

Poisson Distribution
• X ~ Po(m)
• P(X = x) = m^x e^(−m) / x!,   x = 0, 1, 2, …
• E(X) = m
• Var(X) = m
• PoissonPDF(mean, value)
  o Gives the probability of a particular number of occurrences within a time period:
    P(X = x) = poissonpdf(m, x)
• PoissonCDF(mean, value)
  o Gives cumulative probability, i.e. the probability of at most (value) occurrences within a time period:
    P(X ≤ x) = poissoncdf(m, x)

Normal distribution
• X ~ N(μ, σ²)
• f(x) = (1/(σ√(2π))) · e^(−½((x−μ)/σ)²),   −∞ < x < ∞
• NormalCDF(lower, upper, mean, SD)
  o Gives the probability that a value is within a given range:
    P(X ≥ x) = normalcdf(x, 1E99, μ, σ)
    P(x₁ ≤ X ≤ x₂) = normalcdf(x₁, x₂, μ, σ)

Standardized normal distribution
• Z ~ N(0, 1)
    P(Z ≤ z) = normalcdf(−1E99, z)
    P(z₁ ≤ Z ≤ z₂) = normalcdf(z₁, z₂)
    P(Z ≤ a) = P(Z < a)  because  P(Z = a) = 0
• invNorm(percentage)
  o Given a probability, gives the corresponding z-score, i.e. the number of standard deviations from the mean:
    when given P(X ≤ a) = x:   a = invNorm(x, μ, σ) for X ~ N(μ, σ²),   or   z = invNorm(x) for Z ~ N(0, 1)