3.1.1 Introduction to Markov Chain
302 - DECISION SCIENCE
Unit No. 3: Markov Chain & Simulation
Presented By:
Dr. V. M. Tidake
Ph.D. (Financial Management), MBA (FM), MBA (HRM), BE (Chem)
Dean, EDP & Associate Professor, MBA
Sanjivani College of Engineering, Kopargaon
Department of MBA
www.sanjivanimba.org.in
MARKOV CHAIN
A system of events (or outcomes) in which each event depends
only on the immediately preceding event, and not on any earlier
events, is called a Markov Chain.
Examples:
a. The market share for a product during a month.
b. Condition of machines to be used for production each
week.
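As an illustration (a minimal sketch in Python, not taken from the
slides), the code below simulates example (b): the weekly condition
of a machine with states Working, Fairly Working and Non Working.
The transition probabilities in the dictionary are assumed values;
the point is that next week's state is drawn using only the current
week's state, which is exactly the Markov property.

import random

states = ["Working", "Fairly Working", "Non Working"]

# Assumed (hypothetical) one-week transition probabilities:
# transition[i][j] = probability of moving from condition i this
# week to condition j next week.
transition = {
    "Working":        {"Working": 0.80, "Fairly Working": 0.15, "Non Working": 0.05},
    "Fairly Working": {"Working": 0.30, "Fairly Working": 0.50, "Non Working": 0.20},
    "Non Working":    {"Working": 0.60, "Fairly Working": 0.00, "Non Working": 0.40},
}

def next_state(current):
    # The next state depends only on the current state, not on earlier weeks.
    weights = [transition[current][s] for s in states]
    return random.choices(states, weights=weights)[0]

state = "Working"
for week in range(1, 6):
    state = next_state(state)
    print("Week", week, ":", state)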
STATE
Every system consists of a finite number of possible
outcomes, called states.
Examples:
a. Various brands of products represent states.
b. Working, Fairly Working and Non Working, in the case of
machinery in an organization.
Features:
a. They are finite in Number.
b. They are collectively exhaustive and mutually exclusive.
TYPES OF STATES
Absorbing State:
A state which, once entered, the system does not leave.
Non-Absorbing State:
A state which the system may leave.
Transition Probability:
The probability that the system changes from a state (i) to
a state (j) is called a transition probability.
TRANSITION MATRIX
A matrix whose rows list the states in one period and whose
columns list the states in the next period, with the transition
probabilities between them as its entries, is called a
Transition Matrix.
Example:
Construct the transition matrix if, over time, it is found that
70% of the customers using Brand A continue to use it the
next year, while 20% shift to Brand B and 10% to Brand C.
Similarly, 60% of the customers using Brand B continue with
it, while 25% change to A and 15% shift to C. For Brand C,
75% of customers are retained, while 20% are lost to A and
5% to B.
          A     B     C
     A  0.70  0.20  0.10
P =  B  0.25  0.60  0.15
     C  0.20  0.05  0.75
(rows: brand used this year; columns: brand used next year)
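One way to hold this transition matrix in code (a minimal sketch
using NumPy, with the figures from the brand example above) and to
check that every row sums to 1, since the states are collectively
exhaustive and mutually exclusive:

import numpy as np

# Rows: brand used this year (A, B, C); columns: brand used next year.
P = np.array([
    [0.70, 0.20, 0.10],   # customers currently using A
    [0.25, 0.60, 0.15],   # customers currently using B
    [0.20, 0.05, 0.75],   # customers currently using C
])

print(P.sum(axis=1))      # each row sums to 1.0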
Initial Condition
The initial condition is the set of probabilities of the
various states (called state probabilities) in the initial
period of time.
E.g.: If, initially, the market shares of Brands A, B and C
are 50%, 30% and 20% respectively, then the initial condition
(for period n = 0) is
R0 = [0.5  0.3  0.2]
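In the same NumPy sketch, the initial condition is simply a row
vector holding the starting market shares:

import numpy as np

# State probabilities at n = 0: market shares of Brands A, B and C.
R0 = np.array([0.5, 0.3, 0.2])

print(R0.sum())           # 1.0 - the shares cover the whole market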
Results
To find the state probabilities for the kth period of time:
- A Markov chain can be used to predict future state
  probabilities.
- If a1, a2, a3, ... represent the probabilities of the various
  states in the initial period (n = 0), they can be written as
  a row matrix:
  R0 = [a1  a2  a3]
- The state probabilities for the next period (n = 1) are then
  R1 = R0 * P
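Continuing the brand example, a minimal NumPy sketch of R1 = R0 * P
(the transition matrix and initial shares are those given earlier
in this section):

import numpy as np

P = np.array([[0.70, 0.20, 0.10],
              [0.25, 0.60, 0.15],
              [0.20, 0.05, 0.75]])    # transition matrix of the brand example
R0 = np.array([0.5, 0.3, 0.2])        # initial market shares of A, B, C

R1 = R0 @ P                           # state probabilities after one period
print(R1)                             # approximately [0.465 0.29 0.245]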
Results
- Similarly, the state probabilities for the period n = 2 are
  R2 = R1 * P = R0 * P^2, where P^2 = P * P.
- Thus, in general, the state probabilities for the kth period
  are
  Rk = R(k-1) * P = R0 * P^k
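The general result Rk = R0 * P^k can be computed directly with a
matrix power. The sketch below uses k = 2 for the brand example;
the same vector is obtained by multiplying R1 by P once more.

import numpy as np

P = np.array([[0.70, 0.20, 0.10],
              [0.25, 0.60, 0.15],
              [0.20, 0.05, 0.75]])
R0 = np.array([0.5, 0.3, 0.2])

k = 2
Rk = R0 @ np.linalg.matrix_power(P, k)   # R_k = R_0 * P^k
print(Rk)                                # approximately [0.447 0.279 0.274]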