1. Luc_Faucheux_2021
THE RATES WORLD - Part IV_a
Starting to look at modeling rates, a taxonomy of models… appendix on IRMA, and mostly 2-factor short rate models, plus my squid teacher
2. Luc_Faucheux_2021
Couple of notes on those slides
¨ This is expanding a little the section on the IRMA function
¨ More generally we are looking at multi-factor short rate models
¨ Trip down memory lane about some short rate models that were common in the '80s and '90s
¨ Actually most of those came from Salomon Brothers, those guys had a 3-factor model up and running in the late '70s, and with people moving to other firms it gradually became the standard for a while
¨ These days I do not think that anyone still uses it, as most of the industry moved to LMM, BGM and FMM models based on the forwards, not the short rate, as computing became cheaper and faster (still, in rates the requirements are "inordinately demanding" as Peter Carr would say)
3. Luc_Faucheux_2021
IRMA
¨ If you get bored with one-factor short rate models, you can use multi-factor short rate models.
¨ If you are a genius like Craig Fithian and worked at Salomon in 1972, you write (am using the SIE form to be more compact) what got to be known worldwide as the 2+ IRMA model
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dy = -\kappa_y . y . dt + \sigma_y . ([) . dW_y$
¨ $dz = -\kappa_z . (z - x - y) . dt + \sigma_z . ([) . dW_z$
¨ With: $< dW_x . dW_y > = \rho . dt$
¨ And: $r(t) = IRMA[z(t) + \phi(t)]$
¨ Where $IRMA(z)$ is the IRMA function (Interest Rate Mapping I think) which created an incredibly stable skew, they had historical data on skew going back to the 1960s
¨ IRMA was not named after the hurricane from 2017
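The dynamics above are easy to sketch with a simple Euler-Maruyama loop. Everything below is illustrative and hedged: the parameter values, the flat shift $\phi$, and the `irma` stand-in mapping (a plain floor at zero here) are my assumptions, not the Salomon calibration or the real IRMA function.

```python
import numpy as np

# Hypothetical parameters -- illustrative only, not the Salomon calibration.
kx, ky, kz = 1.5, 0.3, 20.0          # mean-reversion speeds kappa_x, kappa_y, kappa_z
sx, sy, sz = 0.01, 0.008, 0.002      # volatilities sigma_x, sigma_y, sigma_z
rho = 0.4                            # common correlation of the dW's (assumption)
phi = 0.03                           # deterministic shift phi(t), held flat here
irma = lambda u: np.maximum(u, 0.0)  # stand-in for the (proprietary) IRMA map

def simulate_2plus(T=5.0, n=5000, seed=0):
    """Euler-Maruyama simulation of the 3-factor '2+ IRMA' short-rate model."""
    rng = np.random.default_rng(seed)
    dt = T / n
    # correlation matrix of (dW_x, dW_y, dW_z); Cholesky gives correlated draws
    C = np.array([[1, rho, rho], [rho, 1, rho], [rho, rho, 1]])
    L = np.linalg.cholesky(C)
    x = y = z = 0.0
    r = np.empty(n)
    for i in range(n):
        dW = L @ rng.standard_normal(3) * np.sqrt(dt)
        x += -kx * x * dt + sx * dW[0]
        y += -ky * y * dt + sy * dW[1]
        z += -kz * (z - x - y) * dt + sz * dW[2]   # z mean-reverts onto x + y
        r[i] = irma(z + phi)
    return r

rates = simulate_2plus()
```

Note how the third factor $z$ reverts onto $x + y$: with a large $\kappa_z$ it shadows the sum of the first two factors, which is exactly the limit used later to recover the 2-factor models.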
5. Luc_Faucheux_2021
IRMA - III
¨ I recall the day when working there I got my hands on the original code (which I think was in FORTRAN) from 1972.
¨ Think about it, when Black-Scholes came out, Salomon Brothers was running its swap and option desk with a 3-factor short rate model with IRMA skew!!
¨ The trick was the calibration of the IRMA mapping.
¨ It was defined with 3 variables originally (we then extended to 4 to account for negative rates, and yours truly tried to replace IRMA with SQUID, more to come on this), but the original three variables were:
¨ Intercept $\alpha$
¨ Slope $\beta$
¨ Regime Change $R$
6. Luc_Faucheux_2021
IRMA - IV
¨ The IRMA function $IRMA(z)$ was actually defined by parametrizing the function:
¨ $f(r) = \frac{dIRMA(z)}{dz} = \frac{dIRMA}{dz}(IRMA^{-1}(r)) = IRMA'(IRMA^{-1}(r))$
[Figure: plot of $f(r)$ against $r$, showing the Intercept $\alpha$, the Slope $\beta$ and the Regime Change $R$]
7. Luc_Faucheux_2021
IRMA - V
¨ Above the Regime Change $R$, the function $f(r)$ is a straight line of slope $\beta$
¨ If that straight line was continued below the regime change $R$ it would intercept the y-axis at the Intercept $\alpha$
¨ Below the Regime Change $R$, the function $f(r)$ is a quadratic function that connects with the straight line at the point $(R, \alpha + \beta . R)$ and goes through the origin $(0,0)$
¨ Note, when I got there in 2002, that function was still used throughout the firm and matched the observed skew in a very stable and remarkable manner. We dabbled in tweaking it for negative rates by essentially adding a parameter similar to the shifted lognormal model, so that the above sentence got changed to:
¨ Below the Regime Change $R$, the function $f(r)$ is a quadratic function that connects with the straight line at the point $(R, \alpha + \beta . R)$ and goes through a point $(-s, 0)$ left of the origin
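The piecewise description above can be sketched directly. One caveat: the deck only states that the quadratic connects with the line at $R$ and passes through the origin, which leaves one coefficient free; matching the slope $\beta$ at $R$ as well (smooth pasting) is my assumption to pin it down.

```python
import numpy as np

def irma_f(r, alpha=0.06, beta=0.2, R=0.01):
    """Piecewise local-vol style function f(r): straight line alpha + beta*r
    above the regime change R, quadratic below R going through the origin.
    The quadratic coefficients come from matching value AND slope at R
    (the smooth-pasting condition is an assumption, not stated in the deck)."""
    r = np.asarray(r, dtype=float)
    a = -alpha / R**2            # solves f(R) = alpha + beta*R, f'(R) = beta, f(0) = 0
    b = beta + 2.0 * alpha / R
    return np.where(r >= R, alpha + beta * r, a * r**2 + b * r)
```

With the historical parameters quoted later in the deck ($\alpha = 0.06$, $\beta = 0.2$, $R = 0.01$), $f$ vanishes at zero rate (lognormal-like behavior at low rates) and is affine above $R$ (normal-like behavior at high rates).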
8. Luc_Faucheux_2021
IRMA - VI
¨ You can see the beauty of this: $r(t) = IRMA[z(t) + \phi(t)]$
¨ IF ($\beta = 0$, $R = 0$, $\alpha = cte$) then we have: $f(r) = \alpha$ and so $\frac{dIRMA(z)}{dz} = \alpha$
¨ $r(t) = IRMA(z) = \alpha . z + C$ is a linear function of the Gaussian variable $z(t)$, the model will produce a NORMAL skew
¨ IF ($\beta = cte$, $R = 0$, $\alpha = 0$) then we have: $f(r) = \beta . r$ and so $\frac{dIRMA(z)}{dz} = \beta . IRMA(z)$
¨ We then get: $r(t) = IRMA(z) = \exp(\beta . z + C)$
¨ $r(t)$ is an exponential function of the Gaussian variable $z(t)$, the model will produce a LOGNORMAL skew
¨ The quadratic part under the Regime Change $R$ when non-zero will fold the distribution of $z(t)$ back on positive rates, so the model avoids negative rates (which for a while was deemed to be a good thing, until things changed)
¨ We will go back to all the beautiful ways we can parametrize IRMA to recover the market skew and smile
9. Luc_Faucheux_2021
IRMA - VII
¨ The 3 parameters had been calibrated to historical data for the skew covering like 40 years of historical market moves or so, which was in itself amazing (the fact that Salomon had a clean database that you could use that was going back so far)
¨ The 3 parameters were surprisingly stable, and essentially produced something that was getting Lognormal at low rates below the Regime Change $R$, and closer to Normal above the Regime Change $R$
¨ Ask anyone who worked on the options desk there and worked with 2+IRMA, and they might still remember those parameters by heart
¨ Intercept $\alpha = 0.06$
¨ Slope $\beta = 0.2$
¨ Regime Change $R = 0.01$
¨ Yours truly worked on implementing SQUID in order to recover the market skew and smile at high strike (SQUID = Skew of QUadratic Interest rate Distribution)
10. Luc_Faucheux_2021
IRMA - VIII
¨ The SQUID function $g(r)$ in order to recover skew and smile. Long live baby squid Cthulhu!!
[Figure: plot of $g(r)$ against $r$, showing the Intercept $\alpha$, the Slope $\beta$, the Regime Change $R_1$ and the Quadratic Switch $R_2$]
12. Luc_Faucheux_2021
IRMA - VIII
¨ The model became quite widely known and adopted in the literature and in the financial industry.
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dy = -\kappa_y . y . dt + \sigma_y . ([) . dW_y$
¨ $dz = -\kappa_z . (z - x - y) . dt + \sigma_z . ([) . dW_z$
¨ With: $< dW_x . dW_y > = \rho . dt$
¨ And: $r(t) = IRMA[z(t) + \phi(t)] = F(z)$
¨ It is referred to as a 3-factor OU (Ornstein-Uhlenbeck) process
¨ Remember Ornstein-Uhlenbeck is only a fancy way to say "Langevin"
13. Luc_Faucheux_2021
IRMA - IX
¨ At Citi, it was known as 2+
¨ The reason it was known as 2+ is actually quite funny. A two-factor version had been implemented previously and had been approved internally by Risk, modeling, …
¨ A three-factor version would have been too much work to go through all the documentation and approval processes (remember at the time things were a little more flexible), so it was easier to pass the new 3-factor model as a modification (a +) on the existing 2-factor model and call it 2+
¨ Every time an option trader drives on the highway and sees the sign for the HOV (High Occupancy Vehicle) lanes with usually an indication of "2+", meaning that you can be on the HOV if you have 2 passengers or more in your car, he or she thinks with sweet longing about the 2+ IRMA model, at least I know I do
15. Luc_Faucheux_2021
IRMA - Piterbarg - I
¨ In the literature, it is sometimes referred to (with some tweaks) as the Gaussian model.
¨ In Piterbarg (p. 494), you see the original formulation of the "2 Factor Gaussian model"
¨ Piterbarg actually writes the equations as:
¨ $db = -\kappa_b . b . dt + \sigma_b . ([) . dW_b$
¨ $dr = \kappa_r . \{\theta(t) + b(t) - r(t)\} . dt + \sigma_r . ([) . dW_r$
¨ With: $< dW_b . dW_r > = \rho . dt$
16. Luc_Faucheux_2021
IRMA - Piterbarg - II
¨ But you can break those down as:
¨ Step 1: replace $b(t)$ by $x(t)$ (that is an easy one…)
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dr = \kappa_r . \{\theta(t) + x(t) - r(t)\} . dt + \sigma_r . ([) . dW_r$
¨ With: $< dW_x . dW_r > = \rho . dt$
17. Luc_Faucheux_2021
IRMA - Piterbarg - III
¨ Step 2: forget about the second equation because this is a 2-factor model, not a 3-factor one
¨ So forget about $y$ being a stochastic variable, it is a deterministic variable
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $y(t) = \theta(t)$
¨ $dz = \kappa_z . \{y(t) + x(t) - z(t)\} . dt + \sigma_z . ([) . dW_z$
¨ With: $< dW_x . dW_z > = \rho . dt$
¨ And: $r(t) = IRMA[z(t) + \phi(t)] = F(z) = z$
¨ That is as simple an IRMA function as we can get, and it is called Gaussian because it does not perturb the initial Gaussian distribution of the Langevin equations. I will add the slides from the Langevin deck to remind us that the probability distribution is a Gaussian indeed
¨ Also the linear transformation $F(z) = z$ keeps the Gaussian untouched
18. Luc_Faucheux_2021
IRMA - Piterbarg - IV
¨ In that formulation:
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $y(t) = \theta(t)$
¨ $dz = \kappa_z . \{y(t) + x(t) - z(t)\} . dt + \sigma_z . ([) . dW_z$
¨ With: $< dW_x . dW_z > = \rho . dt$
¨ And: $r(t) = IRMA[z(t) + \phi(t)] = F(z) = z$
¨ It is customary to think of the variable $x(t)$ as the "short-term rate" or more exactly shocks in the front end of the curve
¨ The variable $y(t) = \theta(t)$ is then the slope of the yield curve
¨ Bear in mind though that this is still a short rate model and not a term structure model
21. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - I
¨ In the Mercurio book it is referred to as the G2++ model, which is quite close to the original "2+" terminology at Salomon Brothers (p. 132)
22. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - II
¨ Mercurio writes down the model as (p. 133)
¨ $dx = -a . x . dt + \sigma . ([) . dW_x$
¨ $dy = -b . y . dt + \eta . ([) . dW_y$
¨ $r(t) = x(t) + y(t) + \phi(t)$
¨ With: $< dW_x . dW_y > = \rho . dt$
¨ We can see again that we can recast those in the 2+IRMA format with:
¨ $\kappa_x = a$
¨ $\kappa_y = b$
¨ $\sigma_x = \sigma$
¨ $\sigma_y = \eta$
23. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - III
¨ We then get:
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dy = -\kappa_y . y . dt + \sigma_y . ([) . dW_y$
¨ $r(t) = x(t) + y(t) + \phi(t)$
¨ With: $< dW_x . dW_y > = \rho . dt$
¨ Again because it is a 2-factor model, the third variable in this case is deterministic and set to:
¨ $z(t) = x(t) + y(t)$
¨ Which itself is the limit of the original:
¨ $dz = -\kappa_z . (z - x - y) . dt + \sigma_z . ([) . dW_z$
¨ With $\sigma_z = 0$ and $\kappa_z \to \infty$
¨ And then $r(t) = IRMA[z(t) + \phi(t)] = F(z) = z(t) + \phi(t)$
24. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - IV
¨ So the G2++ model from Mercurio is also the 2+IRMA with the following:
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dy = -\kappa_y . y . dt + \sigma_y . ([) . dW_y$
¨ $dz = -\kappa_z . (z - x - y) . dt + \sigma_z . ([) . dW_z$
¨ With: $< dW_x . dW_y > = \rho . dt$
¨ And: $r(t) = IRMA[z(t) + \phi(t)] = F(z)$
¨ $\sigma_z = 0$
¨ $\kappa_z \to \infty$ so $x + y - z \to 0$ so $z(t) = x(t) + y(t)$
¨ $r(t) = IRMA[z(t) + \phi(t)] = F(z) = z(t) + \phi(t)$
¨ So again the fact that the IRMA function $F(z)$ is affine enforces the Gaussian distribution
26. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - V
¨ Why is it called Gaussian by the way? Looks pretty complicated, how do we know that distribution is a Gaussian?
¨ First of all, let's look at the equations for $x(t)$ and $y(t)$
¨ $dx = -\kappa_x . x . dt + \sigma_x . ([) . dW_x$
¨ $dy = -\kappa_y . y . dt + \sigma_y . ([) . dW_y$
¨ They BOTH individually fit the Langevin equation (or if you are in Finance you call that an OU process, again just ask Ranjit Bhattacharjee, the king of OU models)
¨ Time for a little refresher on the results of the Langevin deck, using some of those slides verbatim
27. Luc_Faucheux_2021
PDF for the Langevin equation - XVI
¨ The Langevin equation is usually for the particle velocity:
¨ $dV(t) = -\gamma . V(t) . dt + \sigma . ([) . dW$
¨ With the usual diffusion coefficient $D = \frac{\sigma^2}{2}$
¨ $p_V(v, t | V_0(0), t = 0) = \sqrt{\frac{\gamma}{2\pi D (1 - \exp(-2\gamma t))}} . \exp(-\gamma \frac{(v - V_0(0) . \exp(-\gamma t))^2}{2D(1 - \exp(-2\gamma t))})$
¨ Remember $V(t)$ is the stochastic process
¨ $v$ is a regular variable
¨ $P_V(v, t) = Probability(V \leq v, t) = \int_{y=-\infty}^{y=v} p_V(y, t) . dy$
¨ $p_V(v, t) = \frac{\partial}{\partial v} P_V(v, t)$, sometimes just noted $p(v, t | V_0(0), t = 0)$
28. Luc_Faucheux_2021
PDF for the Langevin equation - XVIII
¨ After much calculation, this is the celebrated Langevin PDF:
¨ $p(v, t | V_0(0), t = 0) = \sqrt{\frac{\gamma}{2\pi D (1 - \exp(-2\gamma t))}} . \exp(-\gamma \frac{(v - V_0(0) . \exp(-\gamma t))^2}{2D(1 - \exp(-2\gamma t))})$
¨ SMALL TIME LIMIT
¨ IF $t \to 0$: $\frac{D}{\gamma} . (1 - \exp(-2\gamma t)) = 2Dt + O(t^2)$
¨ $p(v, t | V_0(0), t = 0) \to \frac{1}{\sqrt{4\pi D t}} . \exp(-\frac{(v - V_0(0))^2}{4Dt})$
¨ At short time scales (underdamped regime), the Langevin equation diffuses as a regular diffusion process
¨ $dV(t) = -\gamma V(t) . dt + \sigma . dW \approx \sigma . dW$
29. Luc_Faucheux_2021
PDF for the Langevin equation - XIX
¨ SMALL $\gamma$ limit
¨ IF $\gamma \to 0$: $\frac{D}{\gamma} . (1 - \exp(-2\gamma t)) = 2Dt + O(\gamma)$
¨ $p(v, t | V_0(0), t = 0) \to \frac{1}{\sqrt{4\pi D t}} . \exp(-\frac{(v - V_0(0))^2}{4Dt})$
¨ This is expected since when $\gamma \to 0$ we should recover the usual diffusion:
¨ $dV(t) = -\gamma V(t) . dt + \sigma . dW \approx \sigma . dW$
30. Luc_Faucheux_2021
PDF for the Langevin equation - XX
¨ STEADY STATE LIMIT
¨ IF $t \to \infty$: $\frac{D}{\gamma} . (1 - \exp(-2\gamma t)) = \frac{D}{\gamma} + O(\exp(-2\gamma t))$
¨ $p(v, t | V_0(0), t = 0) = \sqrt{\frac{\gamma}{2\pi D (1 - \exp(-2\gamma t))}} . \exp(-\gamma \frac{(v - V_0(0) . \exp(-\gamma t))^2}{2D(1 - \exp(-2\gamma t))})$
¨ $p(v, t \to \infty | V_0(0), t = 0) = \sqrt{\frac{\gamma}{2\pi D}} . \exp(-\gamma \frac{v^2}{2D})$
¨ This is referred to as the "invariant Gaussian distribution"
31. Luc_Faucheux_2021
PDF for the Langevin equation - XXI
¨ In the case where $\gamma \to 0$, the SDE becomes:
¨ $dV(t) = -\gamma V(t) . dt + \sigma . dW = \sigma . dW$
¨ And we should recover the usual Brownian diffusion
¨ $\bar{v}(t) = V_0(0) . \exp(-\gamma t) \to V_0(0)$
¨ $\sigma_v^2(t) = \sigma_v^2(\infty) + \exp(-2\gamma t) . (\sigma_v^2(0) - \sigma_v^2(\infty)) \to \sigma_v^2(0)$
¨ $p(v, t) = p(v, t | V = V_0(0), t = 0) = \frac{1}{\sqrt{2\pi \sigma^2 t}} . \exp(-\frac{(v - V_0(0))^2}{2\sigma^2 t})$
32. Luc_Faucheux_2021
PDF for the Langevin equation - XXII
¨ $p(v, t | V(t_0), t_0) = \sqrt{\frac{\gamma}{2\pi D (1 - \exp(-2\gamma (t - t_0)))}} . \exp(-\gamma \frac{(v - V(t_0) . \exp(-\gamma (t - t_0)))^2}{2D(1 - \exp(-2\gamma (t - t_0)))})$
¨ The Langevin process is Gaussian (the PDF can be expressed as a Gaussian function)
¨ The Langevin process is Markov (the PDF only depends on $V(t_0), t_0$ and not on the entire history before)
¨ $p(v, t | \{V(s), s \leq t_0\}) = p(v, t | V(t_0), t_0)$
¨ The Langevin process is stationary (only depends on $(t - t_0)$)
¨ $p(v, t + h | V(t_0 + h) = v_0, t_0 + h) = p(v, t | v_0, t_0)$
¨ The increments of the Langevin process are NOT independent. Indeed the increments are not even uncorrelated (as opposed to a Wiener process)
¨ The correlation function decays as an exponential. In some textbooks they base the definition of the process on the knowledge of the auto-correlation function, as an equivalent starting point
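The last two properties (the invariant Gaussian variance $D/\gamma$ and the exponentially decaying autocorrelation) are easy to check numerically by sampling the exact Gaussian transition density above. The parameter values below are illustrative.

```python
import numpy as np

# Exact-in-distribution sampling of the Langevin / OU process
# dV = -gamma*V dt + sigma dW, using its known Gaussian transition density.
gamma, sigma = 2.0, 0.5
D = sigma**2 / 2.0                  # diffusion coefficient D = sigma^2 / 2
dt, n = 0.01, 200_000
rng = np.random.default_rng(1)

v = np.empty(n)
v[0] = 0.0
m = np.exp(-gamma * dt)                                   # one-step decay factor
s = np.sqrt(D / gamma * (1 - np.exp(-2 * gamma * dt)))    # one-step stdev
for i in range(1, n):
    v[i] = v[i - 1] * m + s * rng.standard_normal()

# sample variance should approach the invariant variance D / gamma
print(v.var(), D / gamma)

# autocorrelation at lag k decays like exp(-gamma * k * dt)
k = 50
corr = np.corrcoef(v[:-k], v[k:])[0, 1]
print(corr, np.exp(-gamma * k * dt))
```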
34. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - VII
¨ So both $X(t)$ and $Y(t)$ are normally distributed (the Probability Distribution Function is a Gaussian).
¨ Let's pick one
¨ $p_X(x, t | X(t_0) = x_0, t_0) = \sqrt{\frac{\kappa_x}{2\pi D_x (1 - \exp(-2\kappa_x (t - t_0)))}} . \exp(-\kappa_x \frac{(x - x_0 . \exp(-\kappa_x (t - t_0)))^2}{2D_x(1 - \exp(-2\kappa_x (t - t_0)))})$
¨ This has a mean (average, expected value) given by:
¨ $M[X(t)] = E[X] = E_{t_0}^{x_0}\{X(t) | X(t_0)\} = x_0 . e^{-\kappa_x (t - t_0)} = X(t_0) . e^{-\kappa_x (t - t_0)}$
¨ $V[X(t)] = E[(X(t) - M[X(t)])^2] = E_{t_0}^{x_0}\{(X(t) - M[X(t)])^2 | X(t_0)\}$
¨ $V[X(t)] = \frac{\sigma_x^2}{2\kappa_x} . (1 - e^{-2\kappa_x (t - t_0)})$
35. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - VIII
¨ Another way to see that is to start from the SDE:
¨ $dX = -\kappa_x . X . dt + \sigma_x . ([) . dW_x$ with $D_x = \frac{\sigma_x^2}{2}$
¨ We write the SIE using the ITO integral:
¨ Let's define $\tilde{X}(t) = \exp(\kappa_x . t) . X(t)$ and use the ITO lemma
¨ $d\tilde{X} = \frac{\partial \tilde{X}}{\partial X} . ([) . dX + \frac{1}{2} . \frac{\partial^2 \tilde{X}}{\partial X^2} . dX^2 + \frac{\partial \tilde{X}}{\partial t} . dt$
¨ $\frac{\partial \tilde{X}}{\partial t} = \kappa_x . \exp(\kappa_x . t) . X(t)$
¨ $\frac{\partial \tilde{X}}{\partial X} = \exp(\kappa_x . t)$
¨ $\frac{\partial^2 \tilde{X}}{\partial X^2} = 0$
36. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - IX
¨ Again, remember that we used the notation for the ITO lemma for sake of ease, what you really have is a regular function
¨ $\tilde{X} = f(X)$ Stochastic Variable
¨ $\tilde{x} = f(x)$ Regular "Newtonian" variable with well defined partial derivatives
¨ $\delta f = \frac{\partial f}{\partial x} . \delta X + \frac{1}{2} . \frac{\partial^2 f}{\partial x^2} . (\delta X)^2 + \frac{\partial f}{\partial t} . \delta t$
¨ $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial x} |_{x = X(t), t}$
¨ $\frac{\partial^2 f}{\partial x^2} = \frac{\partial^2 f}{\partial x^2} |_{x = X(t), t}$
¨ $\frac{\partial f}{\partial t} = \frac{\partial f}{\partial t} |_{x = X(t), t}$
39. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XII
¨ $X(t) = X(t_0) . e^{-\kappa_x (t - t_0)} + \int_{u=t_0}^{u=t} e^{-\kappa_x (t - u)} . \sigma_x . ([) . dW_x(u)$
¨ $M[X(t)] = E[X] = E_{t_0}^{x_0}\{X(t) | X(t_0)\}$
¨ $M[X(t)] = E_{t_0}^{x_0}\{X(t_0) . e^{-\kappa_x (t - t_0)} + \int_{u=t_0}^{u=t} e^{-\kappa_x (t - u)} . \sigma_x . ([) . dW_x(u) | X(t_0)\}$
¨ $E_{t_0}^{x_0}\{X(t_0) . e^{-\kappa_x (t - t_0)} | X(t_0)\} = X(t_0) . e^{-\kappa_x (t - t_0)}$
¨ $E_{t_0}^{x_0}\{\int_{u=t_0}^{u=t} e^{-\kappa_x (t - u)} . \sigma_x . ([) . dW_x(u) | X(t_0)\} = 0$ since the ITO integral is a martingale.
¨ This is at times a super useful trick, to take the expected value of something that simplifies if it contains an ITO integral
¨ The Calin book page 195 has a couple of nifty applications of this trick.
¨ $M[X(t)] = X(t_0) . e^{-\kappa_x (t - t_0)}$
40. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XIII
¨ The variance is a little more complicated but also relies on a nifty little property of the ITO integral, the isometry property.
¨ So as a rule:
¨ Average - Mean - Expected Value: use the fact that ITO integrals are martingales
¨ Second moment - variance - standard deviation: use the fact that the ITO integral exhibits the isometry property
¨ $X(t) = X(t_0) . e^{-\kappa_x (t - t_0)} + \int_{u=t_0}^{u=t} e^{-\kappa_x (t - u)} . \sigma_x . ([) . dW_x(u)$
¨ $V[X(t)] = E[(X(t) - M[X(t)])^2] = E_{t_0}^{x_0}\{(X(t) - M[X(t)])^2 | X(t_0)\}$
¨ $M[X(t)] = X(t_0) . e^{-\kappa_x (t - t_0)}$
¨ $X(t) - M[X(t)] = \int_{u=t_0}^{u=t} e^{-\kappa_x (t - u)} . \sigma_x . ([) . dW_x(u)$
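The mean (via the martingale trick) and the variance (via the isometry property) derived above can be sanity-checked with a quick Monte-Carlo, simulating many Euler paths from a fixed starting point $X(t_0) = x_0$. The parameter values are illustrative.

```python
import numpy as np

# Monte-Carlo check of the OU conditional moments derived above:
#   M[X(t)] = X(t0) * exp(-kappa*(t - t0))
#   V[X(t)] = sigma^2/(2*kappa) * (1 - exp(-2*kappa*(t - t0)))
kappa, sigma, x0, T = 1.2, 0.3, 0.1, 2.0
n_paths, n_steps = 200_000, 200
dt = T / n_steps
rng = np.random.default_rng(7)

x = np.full(n_paths, x0)
for _ in range(n_steps):                      # Euler-Maruyama on all paths at once
    x += -kappa * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_mc, var_mc = x.mean(), x.var()
mean_th = x0 * np.exp(-kappa * T)
var_th = sigma**2 / (2 * kappa) * (1 - np.exp(-2 * kappa * T))
print(mean_mc, mean_th)
print(var_mc, var_th)
```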
45. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XVII
¨ OK, so we know that both $X(t)$ and $Y(t)$ are NORMALLY distributed (the Probability Distribution Function is a Gaussian) with mean and variance for $X(t)$ (and respectively the same for $Y(t)$ by changing the notation)
¨ $M[X(t)] = X(t_0) . e^{-\kappa_x (t - t_0)}$
¨ $V[X(t)] = \frac{\sigma_x^2}{2\kappa_x} . (1 - e^{-2\kappa_x (t - t_0)})$
¨ So that could be an indication that defining:
¨ $r(t) = IRMA[z(t) + \phi(t)] = F(z) = z(t) + \phi(t) = x(t) + y(t) + \phi(t)$
¨ Could lead to a Gaussian distribution for $r(t)$
¨ Really to stick to a somewhat consistent notation, we should have used capital letters for the stochastic processes:
¨ $r(t) = IRMA[Z(t) + \phi(t)] = F(Z) = Z(t) + \phi(t) = X(t) + Y(t) + \phi(t)$
¨ Remember that $\phi(t)$ is a deterministic function and not a stochastic process
46. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XVIII
¨ NOW, we also know by now that the sum of a bunch of independent variables that are normally distributed is also normally distributed
¨ The mean of the sum is the sum of the means
¨ The variance of the sum is the sum of the variances
¨ Suppose that the $X_i$ are a number of independent random variables normally distributed with respective means $m_i$ and variances $v_i$
¨ Yeah I know, I have been using capital letters $M_i$ and $V_i$ for those
¨ Promised, once I get the book deal and rewrite it I will hire a couple of interns to ensure the consistency of the notation throughout those decks
¨ Then the sum $S = \sum X_i$ has:
¨ Mean: $m = \sum m_i$
¨ Variance: $v = \sigma^2 = \sum v_i = \sum \sigma_i^2$
47. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XIX
¨ Another way that we knew that was in the deck on diffusion (Stochastic Calculus II if I am not mistaken), when looking at solutions of the SDE:
¨ $dX(t) = a(t) . dt + b(t) . dW$
¨ Let's refresh our memory with a couple of slides from this deck
48. Luc_Faucheux_2021
Sixth simple example - VIII dX = a(t).dt + b(t).dW
¨ So we have the mapping:
¨ $dX(t) = a(t) . dt + b(t) . ([) . dW$
¨ $\frac{\partial}{\partial t} P(x, t) = -\frac{\partial}{\partial x} [a(t) . P(x, t)] + \frac{b(t)^2}{2} . \frac{\partial^2 P(x, t)}{\partial x^2} = -\frac{\partial}{\partial x} [J_A + J_D]$
¨ $J_A = a(t) . P(x, t)$ and $J_D(x, t) = -D(t) . \frac{\partial P(x, t)}{\partial x}$
¨ Defining:
¨ $\bar{b}(t)^2 . t = \int_{s=0}^{s=t} b(s)^2 . ds$ setting $\sigma(t) = b(t)$ and $D(t) = \frac{\sigma(t)^2}{2}$
¨ Also defining: $\bar{D}(t) . t = \int_{s=0}^{s=t} D(s) . ds$
¨ $\bar{D}(t)$ is the average diffusion coefficient over time
¨ $\bar{b}(t)$ is the average volatility coefficient over time
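The time-averaged coefficients above can be checked numerically: for deterministic $a(t)$ and $b(t)$, the mean of $X(t)$ is the integral of $a$ and the variance is the integral of $b^2$, i.e. $\bar{b}(t)^2 . t$. The particular $a(t)$ and $b(t)$ below are made up for the illustration.

```python
import numpy as np

# For dX = a(t).dt + b(t).([).dW with deterministic a(t), b(t), the mapping
# above gives Var[X(t)] = bbar(t)^2 * t = integral of b(s)^2 ds.
# Illustrative (assumed) coefficients: a(t) = 0.1, b(t) = 0.2 + 0.1*t.
a = lambda t: 0.1
b = lambda t: 0.2 + 0.1 * t

T, n_steps, n_paths = 1.0, 500, 100_000
dt = T / n_steps
rng = np.random.default_rng(3)

x = np.zeros(n_paths)
t = 0.0
for _ in range(n_steps):
    x += a(t) * dt + b(t) * np.sqrt(dt) * rng.standard_normal(n_paths)
    t += dt

var_th = sum(b(i * dt)**2 * dt for i in range(n_steps))   # Riemann sum of b^2
print(x.mean(), 0.1 * T)       # mean = integral of a(s) ds
print(x.var(), var_th)         # variance = integral of b(s)^2 ds
```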
50. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XX
¨ So this makes sense if you think of the sum of independent normal variables if you index those variables with time
¨ $dX(t) = a(t) . dt + b(t) . ([) . dW$
¨ $X(t) - X(t_0) = \int_{u=t_0}^{u=t} dX(u)$
¨ At each time $t$, the little increment $dX(t)$ is picked from a normal distribution with mean $a(t) . dt$ and variance $b(t)^2 . dt$
¨ So $X(t) - X(t_0)$ is picked from a normal distribution with:
¨ Mean: $m(t) = M[X(t)] = E[X] = E_{t_0}^{x_0}\{X(t) | X(t_0)\} = X(t_0) + \int_{s=t_0}^{s=t} a(s) . ds$
¨ Variance: $v(t) = \bar{b}(t)^2 . (t - t_0) = \int_{s=t_0}^{s=t} b(s)^2 . ds$
¨ $v(t) = 2\bar{D}(t) . (t - t_0) = 2 \int_{s=t_0}^{s=t} D(s) . ds$
51. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXI
¨ At each time $t$, the little increment $dX(t)$ is picked from a normal distribution with mean $a(t) . dt$ and variance $b(t)^2 . dt$
¨ $X(t) - X(t_0)$ is picked from a normal distribution with:
¨ Mean: $\int_{s=t_0}^{s=t} a(s) . ds$
¨ Variance: $\int_{s=t_0}^{s=t} b(s)^2 . ds$
¨ So this should not come as a surprise that a sum of normally distributed random variables is itself normally distributed
¨ But WAIT a second you should say, the case we are looking at is NOT independent as we have: $< dW_x . dW_y > = \rho . dt$
52. Luc_Faucheux_2021
A sum of independent normally distributed variables is also normally distributed.
How about when there is correlation?
53. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXII
¨ This is true indeed, so we just need to be a little more careful here.
¨ Almost there in justifying the use of the term "Gaussian" for those models, so here we go:
¨ First of all, let's see what we can say about the mean and the variance, before saying anything about the functional form of the PDF
¨ We could also express the equations in a slightly different way (do a Cholesky decomposition of the covariance matrix).
54. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXIII
¨ In any case, let's look at the sum of two normally distributed variables that are NOT independent (have a non-zero correlation)
¨ $Z(t) = X(t) + Y(t)$
¨ $M[X(t)]$ and $V[X(t)]$ are such that $X(t) \sim N(M[X(t)], V[X(t)])$
¨ $M[Y(t)]$ and $V[Y(t)]$ are such that $Y(t) \sim N(M[Y(t)], V[Y(t)])$
¨ $M[Z(t)] = E[X + Y] = E\{X\} + E\{Y\} = M[X(t)] + M[Y(t)]$
¨ $V[Z(t)] = E[(Z(t) - M[Z(t)])^2]$
¨ $V[Z(t)] = E[Z(t)^2 + M[Z(t)]^2 - 2M[Z(t)] . Z(t)]$
¨ $V[Z(t)] = E\{Z(t)^2\} + E\{M[Z(t)]^2\} - E\{2M[Z(t)] . Z(t)\}$
¨ $V[Z(t)] = E\{Z(t)^2\} + M[Z(t)]^2 - 2M[Z(t)] . E\{Z(t)\}$
¨ $V[Z(t)] = E\{Z(t)^2\} + M[Z(t)]^2 - 2M[Z(t)] . M[Z(t)] = E\{Z(t)^2\} - M[Z(t)]^2$
57. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXVI
¨ $V[Z(t)] = V[X(t)] + V[Y(t)] + 2 . \{E[X(t) . Y(t)] - M[X(t)] . M[Y(t)]\}$
¨ $V[Z(t)] = V[X(t)] + V[Y(t)] + 2 . E[(X(t) - M[X(t)]) . (Y(t) - M[Y(t)])]$
¨ That is right there the definition of the covariance
¨ $Cov(X(t), Y(t)) = E[(X(t) - M[X(t)]) . (Y(t) - M[Y(t)])]$
¨ $V[Z(t)] = V[X(t)] + V[Y(t)] + 2 . Cov(X(t), Y(t))$
¨ IF $X(t)$ and $Y(t)$ are independent, the covariance is 0, and we recover the fact that the variance of the sum is the sum of the variances
¨ Note that the reverse is NOT true, you could have non-independent variables that will show a 0 covariance
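The identity $V[Z] = V[X] + V[Y] + 2 . Cov(X, Y)$ is easy to confirm on a sample (the variance and covariance values below are arbitrary):

```python
import numpy as np

# Sample check that Var[Z] = Var[X] + Var[Y] + 2*Cov(X, Y) for correlated normals.
rng = np.random.default_rng(0)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 2.0]])        # Var X = 1, Var Y = 2, Cov = 0.6
xy = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x, y = xy[:, 0], xy[:, 1]
z = x + y

lhs = z.var()
rhs = x.var() + y.var() + 2 * np.cov(x, y)[0, 1]
print(lhs, rhs, 1.0 + 2.0 + 2 * rho)            # sample vs theoretical 4.2
```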
59. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXVIII
¨ All right, back to:
¨ $Cov(X(t), Y(t)) = E[(X(t) - M[X(t)]) . (Y(t) - M[Y(t)])]$
¨ $V[Z(t)] = V[X(t)] + V[Y(t)] + 2 . Cov(X(t), Y(t))$
¨ Note that if $Cov(X(t), Y(t)) = 0$, then we also have by expanding the first equation:
¨ $E\{X(t) . Y(t)\} = M[X(t)] . M[Y(t)]$
¨ So we have shown that the sum of two variables is such that:
¨ $Z(t) = X(t) + Y(t)$
¨ $M[Z(t)] = M[X(t)] + M[Y(t)]$
¨ $V[Z(t)] = V[X(t)] + V[Y(t)] + 2 . Cov(X(t), Y(t))$
¨ Note that this is NOT saying that if $X(t)$ and $Y(t)$ are normally distributed, then $Z(t)$ is also normally distributed. This requires a little more work but we are almost there.
60. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXIX
¨ I have to admit here that I do not have a super elegant proof that the sum of correlated Gaussians is also a Gaussian. Turns out the math gets a little tricky between marginal and joint distributions.
¨ The best I can sort of do on that one is following deck III of the Stochastic Calculus that we went over a while back.
¨ So here it goes for the best I can do:
¨ You have 2 variables:
¨ $dX(t) = a_x(t, X(t)) . dt + b_x(t, X(t)) . ([) . dW_x(t)$
¨ $dY(t) = a_y(t, Y(t)) . dt + b_y(t, Y(t)) . ([) . dW_y(t)$
¨ With: $< dW_x . dW_y > = \rho . dt$
62. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXXI
¨ The correlation does NOT change the drift
¨ The correlation does NOT affect the expected return
¨ That is not surprising, we know that from MPT (mean-variance portfolio theory), where the correlation between assets will change the risk (variance, volatility, standard deviation), but NOT the expected return
¨ Conversely, the drift will NOT change the correlation
¨ We also know that from the RN (Radon-Nikodym) section with the change of measure, which amounts to an added drift to the Brownian motion.
¨ The change of measure does NOT change the variance
¨ The change of measure does NOT change the correlation
¨ The change of measure changes the drift (expected return, mean, average, advection) but NOT the diffusion, variance, standard deviation, correlation
65. Luc_Faucheux_2021
Introducing the [$\alpha$] calculus - II
¨ The ITO integral is defined as:
¨ $\int_{t=t_a}^{t=t_b} f(X(t)) . ([) . dW(t) = \lim_{N \to \infty} \{\sum_{k=1}^{k=N} f(X(t_k)) . [W(t_{k+1}) - W(t_k)]\}$
¨ The Stratonovich integral is defined as:
¨ $\int_{t=t_a}^{t=t_b} f(X(t)) . (\circ) . dW(t) = \lim_{N \to \infty} \{\sum_{k=1}^{k=N} f([X(t_k) + X(t_{k+1})]/2) . [W(t_{k+1}) - W(t_k)]\}$
¨ We can define the $[\alpha]$ integral as:
¨ $\int_{t=t_a}^{t=t_b} f(X(t)) . ([\alpha]) . dW(t) = \lim_{N \to \infty} \{\sum_{k=1}^{k=N} f(X(t_k) + \alpha . [X(t_{k+1}) - X(t_k)]) . [W(t_{k+1}) - W(t_k)]\}$
¨ ITO will be the case $\alpha = 0$
¨ STRATO will be the case $\alpha = 1/2$
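The three definitions differ only in where the integrand is evaluated, which one can see directly on a discretized Brownian path: with $f(W) = W$, the $\alpha = 0$ sum converges to $(W_T^2 - T)/2$ and the $\alpha = 1/2$ sum to $W_T^2/2$.

```python
import numpy as np

# Riemann sums for the [alpha] integral of f(W) = W against dW on one Brownian
# path: ITO (alpha = 0) tends to (W_T^2 - T)/2, STRATO (alpha = 1/2) to W_T^2/2.
rng = np.random.default_rng(42)
T, n = 1.0, 2_000_000
dt = T / n
dW = np.sqrt(dt) * rng.standard_normal(n)
W = np.concatenate(([0.0], np.cumsum(dW)))       # W(t_k), k = 0..n

def alpha_integral(alpha):
    # f evaluated at W(t_k) + alpha*(W(t_{k+1}) - W(t_k)), times the increment
    left, incr = W[:-1], np.diff(W)
    return np.sum((left + alpha * incr) * incr)

WT = W[-1]
print(alpha_integral(0.0), (WT**2 - T) / 2)      # ITO
print(alpha_integral(0.5), WT**2 / 2)            # STRATO
```

The $\alpha = 1/2$ sum telescopes exactly to $W_T^2/2$ on any path, while the $\alpha = 0$ sum picks up the extra $-\sum (\delta W)^2 / 2 \to -T/2$ term, which is precisely the Ito correction.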
68. Luc_Faucheux_2021
Introducing the [$\alpha$] calculus - V
¨ For the SDE we had the following mapping between ITO and STRATO
¨ The ITO SDE:
¨ $dX(t) = a(t, X(t)) . dt + b(t, X(t)) . ([) . dW$
¨ Has the same solution (is the same) as the STRATO SDE in STRATO calculus:
¨ $dX(t) = [a(t, X(t)) - \frac{1}{2} . b(t, X(t)) . \frac{\partial}{\partial x} b(t, X(t))] . dt + b(t, X(t)) . (\circ) . dW$
¨ The STRATO SDE:
¨ $dX(t) = \hat{a}(t, X(t)) . dt + \hat{b}(t, X(t)) . (\circ) . dW$
¨ Has the same solution (is the same) as the ITO SDE in ITO calculus:
¨ $dX(t) = [\hat{a}(t, X(t)) + \frac{1}{2} . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t))] . dt + \hat{b}(t, X(t)) . ([) . dW$
69. Luc_Faucheux_2021
Introducing the [$\alpha$] calculus - VI
¨ This now becomes:
¨ The ITO SDE:
¨ $dX(t) = a(t, X(t)) . dt + b(t, X(t)) . ([) . dW$
¨ Has the same solution (is the same) as the $[\alpha]$ SDE in $[\alpha]$ calculus:
¨ $dX(t) = [a(t, X(t)) - \alpha . b(t, X(t)) . \frac{\partial}{\partial x} b(t, X(t))] . dt + b(t, X(t)) . ([\alpha]) . dW$
¨ The $[\alpha]$ SDE:
¨ $dX(t) = \hat{a}(t, X(t)) . dt + \hat{b}(t, X(t)) . ([\alpha]) . dW$
¨ Has the same solution (is the same) as the ITO SDE in ITO calculus:
¨ $dX(t) = [\hat{a}(t, X(t)) + \alpha . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t))] . dt + \hat{b}(t, X(t)) . ([) . dW$
70. Luc_Faucheux_2021
Introducing the [$\alpha$] calculus - VII
¨ The ITO lemma (chain rule) reads:
¨ $f(X(t_b)) - f(X(t_a)) = \int_{t=t_a}^{t=t_b} \frac{\partial f}{\partial x} . ([) . dX(t) + \frac{1}{2} \int_{t=t_a}^{t=t_b} \frac{\partial^2 f}{\partial x^2}(X(t)) . b(t, X(t))^2 . dt$
¨ In the "limit" of small time increments, this can be written formally as the Ito lemma:
¨ $\delta f = \frac{\partial f}{\partial x} . \delta X + \frac{1}{2} . \frac{\partial^2 f}{\partial x^2} . b^2 . \delta t$
¨ The STRATO lemma reads:
¨ $f(X(t_b)) - f(X(t_a)) = \int_{t=t_a}^{t=t_b} \frac{\partial f}{\partial x} . (\circ) . dX(t)$
¨ In the "limit" of small time increments, this can be written formally as the Strato lemma:
¨ $\delta f = \frac{\partial f}{\partial x} . (\circ) . \delta X$
71. Luc_Faucheux_2021
Introducing the [$\alpha$] calculus - VIII
¨ In the $[\alpha]$ calculus the $[\alpha]$ lemma (chain rule) now reads:
¨ $f(X(t_b)) - f(X(t_a)) = \int_{t=t_a}^{t=t_b} \frac{\partial f}{\partial x} . ([\alpha]) . dX(t) + (\frac{1}{2} - \alpha) . \int_{t=t_a}^{t=t_b} \frac{\partial^2 f}{\partial x^2}(X(t)) . b(t, X(t))^2 . dt$
¨ In the "limit" of small time increments, this can be written formally as the $[\alpha]$ lemma:
¨ $\delta f = \frac{\partial f}{\partial x} . \delta X + (\frac{1}{2} - \alpha) . \frac{\partial^2 f}{\partial x^2} . b^2 . \delta t$
¨ NOTE: you can convince yourselves by redoing the derivation we had on pages 55-60
¨ This actually highlights why STRATO took the middle point $\alpha = 1/2$, as this is the point that cancels out the $(1/2)$ coming from the Taylor expansion of $f(X(t_b))$ from the left point.
¨ $f(X(t_b)) - f(X(t_a)) = \lim_{N \to \infty} \sum_{k=1}^{k=N} \{f(X(t_k)) - f(X(t_{k-1}))\}$
¨ $f(X(t_b)) - f(X(t_a)) = \lim_{N \to \infty} \sum_{k=1}^{k=N} \{\frac{\partial f}{\partial x} . ([) . \delta X + \frac{1}{2} . \frac{\partial^2 f}{\partial x^2} . ([) . (\delta X)^2\}$
78. Luc_Faucheux_2021
Why did we go through all this trouble?
¨ Let's recap:
¨ $dX(t) = a(t, X(t)) . dt + b(t, X(t)) . ([) . dW$ in ITO calculus
¨ We have shown that in that case the PDF follows the FORWARD ITO Kolmogorov (FP) equation
¨ $\frac{\partial p(x, t | x_0, t_0)}{\partial t} = -\frac{\partial}{\partial x} [a(X(t), t) . p(x, t | x_0, t_0)] + \frac{\partial}{\partial x} [\frac{1}{2} . \frac{\partial}{\partial x} [b(X(t), t)^2 . p(x, t | x_0, t_0)]]$
¨ $< \Delta X > = E[\Delta X] = < x >_{t+\Delta t} - < x >_t = a(X(t), t) . \Delta t$ (advection term)
¨ $< \Delta X^2 > = E[\Delta X^2] = < (x - < x >_{t+\Delta t})^2 >_{t+\Delta t} = b(X(t), t)^2 . \Delta t$ (diffusion term)
79. Luc_Faucheux_2021
Why did we go through all this trouble? - II
¨ We ALSO know that going between ITO and $[\alpha]$:
¨ The ITO SDE:
¨ $dX(t) = a(t, X(t)) . dt + b(t, X(t)) . ([) . dW$
¨ Has the same solution (is the same) as the $[\alpha]$ SDE in $[\alpha]$ calculus:
¨ $dX(t) = [a(t, X(t)) - \alpha . b(t, X(t)) . \frac{\partial}{\partial x} b(t, X(t))] . dt + b(t, X(t)) . ([\alpha]) . dW$
¨ The $[\alpha]$ SDE:
¨ $dX(t) = \hat{a}(t, X(t)) . dt + \hat{b}(t, X(t)) . ([\alpha]) . dW$
¨ Has the same solution (is the same) as the ITO SDE in ITO calculus:
¨ $dX(t) = [\hat{a}(t, X(t)) + \alpha . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t))] . dt + \hat{b}(t, X(t)) . ([) . dW$
80. Luc_Faucheux_2021
Why did we go through all this trouble? - III
¨ And so, if we start with an $[\alpha]$ SDE: $dX(t) = \hat{a}(t, X(t)) . dt + \hat{b}(t, X(t)) . ([\alpha]) . dW$
¨ It has the same solution (is the same) as the ITO SDE in ITO calculus
¨ $dX(t) = [\hat{a}(t, X(t)) + \alpha . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t))] . dt + \hat{b}(t, X(t)) . ([) . dW$
¨ Which will then follow the ITO FORWARD Kolmogorov (FP) equation:
¨ $\frac{\partial p(x, t | x_0, t_0)}{\partial t} = -\frac{\partial}{\partial x} [a(X(t), t) . p(x, t | x_0, t_0)] + \frac{\partial}{\partial x} [\frac{1}{2} . \frac{\partial}{\partial x} [b(X(t), t)^2 . p(x, t | x_0, t_0)]]$
¨ With:
¨ $a(X(t), t) = \hat{a}(t, X(t)) + \alpha . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t))$
¨ $b(X(t), t) = \hat{b}(t, X(t))$
¨ $a(X(t), t) = \hat{a}(t, X(t)) + \alpha . b(t, X(t)) . \frac{\partial}{\partial x} b(t, X(t))$
83. Luc_Faucheux_2021
Why did we go through all this trouble? - III - c
¨ ITO: $< \hat{b}(t, X(t)) . ([) . dW > = 0$
¨ One can also go back to the definition of the ITO integral (because remember it is never an SDE, it is ALWAYS an SIE), but essentially the convention $([)$ of taking the value "before the jump" implies that the ITO integral is a martingale of expected value 0
¨ STRATO: $< \hat{b}(t, X(t)) . (\circ) . dW > = [\frac{1}{2}] . \hat{b}(t, X(t)) . \frac{\partial}{\partial x} \hat{b}(t, X(t)) . \Delta t$
¨ Again we can explicitly derive this from the integral, but the convention $(\circ)$ implies taking the value "in the middle of the jump", hence the STRATO integral CANNOT be a martingale and has a non-zero expected value.
¨ We did this derivation when we were looking at the correspondence between ITO and STRATO.
87. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXXVI
¨ $dZ(t) = a_z(t, Z(t)) . dt + b_z(t, Z(t)) . ([) . dW_z(t)$
¨ $a_z(t, Z(t)) = a_x(t, X(t)) + a_y(t, Y(t))$
¨ $b_z^2 = (b_x^2 + b_y^2 + 2 . \rho . b_x . b_y)$
¨ So we know that the PDF for that process will follow the FP equation
¨ This implies that the PDF follows the FORWARD ITO Kolmogorov PDE
¨ $\frac{\partial p(z, t | z_0, t_0)}{\partial t} = -\frac{\partial}{\partial z} [a(Z(t), t) . p(z, t | z_0, t_0)] + \frac{\partial}{\partial z} [\frac{1}{2} . \frac{\partial}{\partial z} [b(Z(t), t)^2 . p(z, t | z_0, t_0)]]$
¨ The PDF ALSO follows the BACKWARD ITO Kolmogorov PDE:
¨ $\frac{\partial p(z, t | z_0, t_0)}{\partial t_0} = -a(z_0, t_0) . \frac{\partial}{\partial z_0} p(z, t | z_0, t_0) - \frac{1}{2} . b(z_0, t_0)^2 . \frac{\partial^2}{\partial z_0^2} p(z, t | z_0, t_0)$
88. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXXVII
¨ $dZ(t) = a_z(t, Z(t)) . dt + b_z(t, Z(t)) . ([) . dW_z(t)$
¨ $a_z = a_x + a_y$
¨ $b_z^2 = (b_x^2 + b_y^2 + 2 . \rho . b_x . b_y)$
¨ $\frac{\partial p(z, t | z_0, t_0)}{\partial t} = -\frac{\partial}{\partial z} [a(Z(t), t) . p(z, t | z_0, t_0)] + \frac{\partial}{\partial z} [\frac{1}{2} . \frac{\partial}{\partial z} [b(Z(t), t)^2 . p(z, t | z_0, t_0)]]$
¨ In almost all cases (and certainly in the simple cases where say the advection and diffusion coefficients are either constant or functions only of time), the solution of that PDE will be a Gaussian.
¨ Am sorry guys, but this is how I explain that the sum of correlated Gaussians is itself a Gaussian
¨ There could be some weird functions $a_x$, $a_y$, $b_x$ and $b_y$ for which that would not be the case.
¨ Usually if you formulated a model with such behavior, switch to a simpler one
89. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXXVIII
¨ I truly wish that I could have been more rigorous in that section, and that keeps me up at night, so apologies for what I perceive to be a cop-out, I would be happy if one of you emails me an insulting letter with a more rigorous approach to this (and also a more general one)
90. Luc_Faucheux_2021
IRMA - Mercurio - G2++ - XXXIX
¨ Saying it in another way, I do not have a good argument as to why:
¨ If $a_x$, $a_y$, $b_x$ and $b_y$ are such functions that the PDFs for $X(t)$ and $Y(t)$ have for solutions a Gaussian distribution
¨ THEN the PDF for $X(t) + Y(t)$, which is of the form:
¨ $\frac{\partial p(z, t | z_0, t_0)}{\partial t} = -\frac{\partial}{\partial z} [a(Z(t), t) . p(z, t | z_0, t_0)] + \frac{\partial}{\partial z} [\frac{1}{2} . \frac{\partial}{\partial z} [b(Z(t), t)^2 . p(z, t | z_0, t_0)]]$
¨ With:
¨ $a_z = a_x + a_y$
¨ $b_z^2 = (b_x^2 + b_y^2 + 2 . \rho . b_x . b_y)$
¨ ALSO has a Gaussian for solution of the PDE
¨ At least it is not obvious to me
91. Luc_Faucheux_2021
IRMA โ Mercurio โ G2++ - XXXX
ยจ In some simple cases (like the ones we are dealing with here), the best way to look at it is
to find an explicit solution of the SDE:
ยจ When we had:
ยจ $dx = -a_x\cdot x\cdot dt + \sigma_x\cdot([)\cdot dW^x$
ยจ We had for the solution:
ยจ $x(t) = x(t')\cdot e^{-a_x\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_x\cdot(t-u)}\cdot\sigma_x\cdot([)\cdot dW^x(u)$
ยจ Which was normally distributed with moments:
ยจ $\mathbb{E}[x(t)] = x(t')\cdot e^{-a_x\cdot(t-t')}$
ยจ $\mathbb{V}[x(t)] = \frac{\sigma_x^2}{2\cdot a_x}\cdot\left(1 - e^{-2\cdot a_x\cdot(t-t')}\right)$
91
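The two moment formulas above are easy to verify by brute force. A short Euler Monte Carlo sketch in NumPy (parameter values are illustrative, not from the deck):

```python
import numpy as np

# Monte Carlo check of the Ornstein-Uhlenbeck moments quoted above:
# E[x(t)] = x(t').exp(-a_x.(t-t'))
# V[x(t)] = sigma_x^2/(2.a_x).(1 - exp(-2.a_x.(t-t')))
rng = np.random.default_rng(1)
a_x, sigma_x, x0, T, n_steps, n_paths = 0.5, 0.01, 0.03, 2.0, 400, 200_000
dt = T / n_steps

x = np.full(n_paths, x0)
for _ in range(n_steps):
    # Euler step of dx = -a_x.x.dt + sigma_x.dW^x
    x += -a_x * x * dt + sigma_x * np.sqrt(dt) * rng.standard_normal(n_paths)

mean_formula = x0 * np.exp(-a_x * T)
var_formula = sigma_x**2 / (2 * a_x) * (1 - np.exp(-2 * a_x * T))
print(x.mean(), mean_formula)  # should agree up to discretization + MC noise
print(x.var(), var_formula)
```

Refining `dt` and increasing `n_paths` tightens the agreement, as expected for an Euler scheme.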
92. Luc_Faucheux_2021
IRMA โ Mercurio โ G2++ - XXXXI
ยจ Similarly we had for $y(t)$:
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot([)\cdot dW^y$
ยจ We had for the solution:
ยจ $y(t) = y(t')\cdot e^{-a_y\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_y\cdot(t-u)}\cdot\sigma_y\cdot([)\cdot dW^y(u)$
ยจ Which was normally distributed with moments:
ยจ $\mathbb{E}[y(t)] = y(t')\cdot e^{-a_y\cdot(t-t')}$
ยจ $\mathbb{V}[y(t)] = \frac{\sigma_y^2}{2\cdot a_y}\cdot\left(1 - e^{-2\cdot a_y\cdot(t-t')}\right)$
92
93. Luc_Faucheux_2021
IRMA โ Mercurio โ G2++ - XXXXII
ยจ So in that case we get an explicit solution for $z(t) = x(t) + y(t)$
ยจ $z(t) = x(t')\cdot e^{-a_x\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_x\cdot(t-u)}\cdot\sigma_x\cdot([)\cdot dW^x(u) + y(t')\cdot e^{-a_y\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_y\cdot(t-u)}\cdot\sigma_y\cdot([)\cdot dW^y(u)$
ยจ And we know that any function of the form:
ยจ $\int_{u=t'}^{u=t} f(u)\cdot([)\cdot dW^y(u)$ is normally distributed, because of the martingale and isometry
properties of the ITO integral
ยจ So $z(t)$ is a sum of correlated normal variables, which is normally distributed (because you can
recast the correlation using the Cholesky decomposition into two independent normal
distributions)
ยจ So in that case you can say that $z(t)$ is normally distributed.
ยจ Again this is in the specific case of the Langevin equation.
ยจ I unfortunately do not have a solid general argument beyond that
93
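The Gaussianity of $z(t) = x(t) + y(t)$ can also be eyeballed numerically: its standardized sample skewness and excess kurtosis should both be near zero. A NumPy sketch (parameters are illustrative):

```python
import numpy as np

# Simulate the two correlated OU factors, form z = x + y, and check that
# z looks Gaussian: skewness ~ 0 and excess kurtosis ~ 0
rng = np.random.default_rng(2)
a_x, a_y, sigma_x, sigma_y, rho = 0.5, 1.5, 0.01, 0.02, 0.6
T, n_steps, n_paths = 1.0, 250, 200_000
dt = T / n_steps

x = np.zeros(n_paths)
y = np.zeros(n_paths)
for _ in range(n_steps):
    # correlated increments built from two independent draws:
    # dW^y = rho.dW~x + sqrt(1-rho^2).dW~y
    g1 = rng.standard_normal(n_paths)
    g2 = rng.standard_normal(n_paths)
    dWx = np.sqrt(dt) * g1
    dWy = np.sqrt(dt) * (rho * g1 + np.sqrt(1 - rho**2) * g2)
    x += -a_x * x * dt + sigma_x * dWx
    y += -a_y * y * dt + sigma_y * dWy

z = x + y
zc = (z - z.mean()) / z.std()
skew = np.mean(zc**3)
excess_kurtosis = np.mean(zc**4) - 3.0
print(skew, excess_kurtosis)  # both should be close to 0 for a Gaussian
```

This is of course not a proof, just a numerical illustration of the slide's claim in the Langevin case.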
95. Luc_Faucheux_2021
Recasting the correlation - I
ยจ The Mercurio G2++ model was written as:
ยจ $dx = -a_x\cdot x\cdot dt + \sigma_x\cdot([)\cdot dW^x$
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot([)\cdot dW^y$
ยจ $dz = -a_z\cdot(z - x - y)\cdot dt + \sigma_z\cdot([)\cdot dW^z$
ยจ With: $< dW^x\cdot dW^y > = \rho\cdot dt$
ยจ $\sigma_z = 0$
ยจ $a_z \to \infty$ so $x + y - z \to 0$ so $z(t) = x(t) + y(t)$
ยจ $r(t) = IRMA[z(t) + \varphi(t)]$ with $F(z) = z$, so $r(t) = z(t) + \varphi(t)$
ยจ Sometimes it is not that convenient to work with: $< dW^x\cdot dW^y > = \rho\cdot dt$
ยจ We would rather work with picking from the Normal distribution in a way where we do not
have to take the correlation into account. We can do that in the following manner:
95
96. Luc_Faucheux_2021
Recasting the correlation - II
ยจ Instead of writing:
ยจ $dx = -a_x\cdot x\cdot dt + \sigma_x\cdot([)\cdot dW^x$
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot([)\cdot dW^y$
ยจ With: $< dW^x\cdot dW^y > = \rho\cdot dt$
ยจ Let us show that we can write the above using 2 other independent Brownian motions:
ยจ $dx = -a_x\cdot x\cdot dt + \sigma_x\cdot d\widetilde{W}^x$
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot\{\rho\cdot d\widetilde{W}^x + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y\}$
ยจ With: $< d\widetilde{W}^x\cdot d\widetilde{W}^y > = 0$
96
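The recasting can be checked directly on the increments: the combination $\rho\cdot d\widetilde{W}^x + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y$ should have unit variance per unit time and correlation $\rho$ with $d\widetilde{W}^x$. A minimal NumPy sketch:

```python
import numpy as np

# Check of the recasting above: with dW^x = dW~x and
# dW^y = rho.dW~x + sqrt(1-rho^2).dW~y, where <dW~x.dW~y> = 0,
# dW^y keeps unit variance and corr(dW^x, dW^y) = rho
rng = np.random.default_rng(3)
rho, n = 0.7, 1_000_000

g1 = rng.standard_normal(n)  # standardized dW~x increments
g2 = rng.standard_normal(n)  # standardized dW~y increments
dWx = g1
dWy = rho * g1 + np.sqrt(1 - rho**2) * g2

print(np.var(dWy))                  # ~1, since rho^2 + (1 - rho^2) = 1
print(np.corrcoef(dWx, dWy)[0, 1])  # ~rho
```

This is exactly the 2x2 Cholesky decomposition of the correlation matrix $\begin{pmatrix}1 & \rho\\ \rho & 1\end{pmatrix}$, written out by hand.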
102. Luc_Faucheux_2021
Recasting the correlation - VIII
ยจ All right so now letโs redo that exercise starting from:
ยจ $dx = -a_x\cdot x\cdot dt + \sigma_x\cdot d\widetilde{W}^x$
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot\{\rho\cdot d\widetilde{W}^x + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y\}$
ยจ With: $< d\widetilde{W}^x\cdot d\widetilde{W}^y > = 0$
ยจ The first one for $x(t)$ is formally the same as before:
ยจ $x(t) = x(t')\cdot e^{-a_x\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_x\cdot(t-u)}\cdot\sigma_x\cdot([)\cdot d\widetilde{W}^x(u)$
ยจ Which was normally distributed with moments:
ยจ $\mathbb{E}[x(t)] = x(t')\cdot e^{-a_x\cdot(t-t')}$
ยจ $\mathbb{V}[x(t)] = \frac{\sigma_x^2}{2\cdot a_x}\cdot\left(1 - e^{-2\cdot a_x\cdot(t-t')}\right)$
102
103. Luc_Faucheux_2021
Recasting the correlation - IX
ยจ The second one for $y(t)$ is slightly more complicated:
ยจ $dy = -a_y\cdot y\cdot dt + \sigma_y\cdot\{\rho\cdot d\widetilde{W}^x + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y\}$
ยจ Again now letโs try to be consistent in our notations (again, this is a promise, once I get that
book deal the notation will be nicely consistent throughout the book)
ยจ $dy(t) = -a_y\cdot y(t)\cdot dt + \sigma_y\cdot\{\rho\cdot d\widetilde{W}^x(t) + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y(t)\}$
ยจ Again as before we are going to use the ITO lemma on:
ยจ $\widetilde{y}(t) = \exp(a_y\cdot t)\cdot y(t)$
ยจ $d\widetilde{y} = \frac{\partial\widetilde{y}}{\partial y}\cdot([)\cdot dy + \frac{1}{2}\cdot\frac{\partial^2\widetilde{y}}{\partial y^2}\cdot dy^2 + \frac{\partial\widetilde{y}}{\partial t}\cdot([)\cdot dt$
103
104. Luc_Faucheux_2021
Recasting the correlation - XI
ยจ Again, remember that we used the notation for ITO lemma for sake of ease, what you really
have is a regular function
ยจ $\widetilde{y} = f(y)$ Stochastic variable
ยจ $\overline{y} = f(y)$ Regular "Newtonian" variable with well defined partial derivatives
ยจ $\delta f = \frac{\partial f}{\partial y}\cdot\delta y + \frac{1}{2}\cdot\frac{\partial^2 f}{\partial y^2}\cdot(\delta y)^2 + \frac{\partial f}{\partial t}\cdot dt$
ยจ $\frac{\partial f}{\partial y} = \frac{\partial f}{\partial y}\Big|_{y=y(t),t}$
ยจ $\frac{\partial^2 f}{\partial y^2} = \frac{\partial^2 f}{\partial y^2}\Big|_{y=y(t),t}$
ยจ $\frac{\partial f}{\partial t} = \frac{\partial f}{\partial t}\Big|_{y=y(t),t}$
104
105. Luc_Faucheux_2021
Recasting the correlation - XII
ยจ $dy(t) = -a_y\cdot y(t)\cdot dt + \sigma_y\cdot\{\rho\cdot d\widetilde{W}^x(t) + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y(t)\}$
ยจ $\widetilde{y}(t) = \exp(a_y\cdot t)\cdot y(t)$
ยจ $d\widetilde{y} = \frac{\partial\widetilde{y}}{\partial y}\cdot([)\cdot dy + \frac{1}{2}\cdot\frac{\partial^2\widetilde{y}}{\partial y^2}\cdot dy^2 + \frac{\partial\widetilde{y}}{\partial t}\cdot([)\cdot dt$
ยจ $\frac{\partial\widetilde{y}}{\partial y} = \exp(a_y\cdot t)$
ยจ $\frac{\partial^2\widetilde{y}}{\partial y^2} = 0$
ยจ $\frac{\partial\widetilde{y}}{\partial t} = a_y\cdot\exp(a_y\cdot t)\cdot y(t) = a_y\cdot\widetilde{y}(t)$
ยจ $d\widetilde{y} = \exp(a_y\cdot t)\cdot([)\cdot dy + \frac{1}{2}\cdot 0\cdot dy^2 + a_y\cdot\widetilde{y}(t)\cdot([)\cdot dt$
105
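Carrying this through (a sketch of the integration step, in the same notation; the drift of $dy$ cancels against the $a_y\cdot\widetilde{y}\cdot dt$ term):

```latex
% Substituting dy into the last line, the -a_y.y.dt piece cancels the a_y.ytilde.dt piece:
d\widetilde{y} = e^{a_y\cdot t}\cdot\sigma_y\cdot\{\rho\cdot d\widetilde{W}^x(t) + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y(t)\}
% Integrating from t' to t and multiplying back by e^{-a_y.t}:
y(t) = y(t')\cdot e^{-a_y\cdot(t-t')} + \int_{u=t'}^{u=t} e^{-a_y\cdot(t-u)}\cdot\sigma_y\cdot\{\rho\cdot d\widetilde{W}^x(u) + \sqrt{1-\rho^2}\cdot d\widetilde{W}^y(u)\}
```

Same structure as the solution for $x(t)$, just with the recast combination of the two independent Brownian motions inside the ITO integral.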
114. Luc_Faucheux_2021
Recasting the correlation - XXI
ยจ So far we have recovered the same expression for the mean and variance for both $x(t)$ and
also $y(t)$
ยจ We are now left with the covariance
ยจ $Cov(x(t), y(t)) = \mathbb{E}\left[(x(t) - \mathbb{E}[x(t)])\cdot(y(t) - \mathbb{E}[y(t)])\right]$
ยจ All right, we are almost there, and since we know that both $x(t)$ and $y(t)$ are normally
distributed, if we can prove that we also recover the covariance then both descriptions are
identical. That will be quite an achievement (again it might seem obvious if you are not too
concerned about being somewhat rigorous, or at least trying to be).
114
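The covariance can also be checked by simulation. A NumPy sketch using the recast independent-Brownian form; the closed form it is compared against, $\rho\cdot\sigma_x\cdot\sigma_y/(a_x+a_y)\cdot(1-e^{-(a_x+a_y)(t-t')})$, is the standard result for two correlated OU factors and is quoted here as an assumption, with illustrative parameters:

```python
import numpy as np

# Monte Carlo check of Cov(x(t), y(t)) for the two correlated OU factors,
# simulated with the recast form dW^y = rho.dW~x + sqrt(1-rho^2).dW~y
rng = np.random.default_rng(4)
a_x, a_y, sigma_x, sigma_y, rho = 0.8, 0.3, 0.015, 0.02, -0.5
T, n_steps, n_paths = 1.0, 200, 300_000
dt = T / n_steps

x = np.zeros(n_paths)
y = np.zeros(n_paths)
for _ in range(n_steps):
    g1 = rng.standard_normal(n_paths)
    g2 = rng.standard_normal(n_paths)
    x += -a_x * x * dt + sigma_x * np.sqrt(dt) * g1
    y += -a_y * y * dt + sigma_y * np.sqrt(dt) * (rho * g1 + np.sqrt(1 - rho**2) * g2)

cov_mc = np.cov(x, y)[0, 1]
cov_formula = rho * sigma_x * sigma_y / (a_x + a_y) * (1 - np.exp(-(a_x + a_y) * T))
print(cov_mc, cov_formula)  # should agree up to discretization + MC noise
```

The agreement is what the next slides establish analytically for both descriptions.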
120. Luc_Faucheux_2021
Recasting the correlation - XXVII
ยจ This is actually seen quite often in the literature (Mercurio p.134 for example), and is usually
useful when you want to deal with independent Brownian motions, and do not want to have
to deal with the correlation when โpickingโ the stochastic process out of the distribution.
120
136. Luc_Faucheux_2021
Recasting the correlation instantaneous - X
ยจ So we do recover the same expressions in both sets of equations for the quantities:
ยจ $< dx(t)\cdot dx(t) >$
ยจ $< dx(t)\cdot dy(t) >$
ยจ $< dy(t)\cdot dy(t) >$
ยจ $< dx(t) >$
ยจ $< dy(t) >$
ยจ Following the logic of the deck on stochastic calculus, we will also recover the same PDEs
(Partial Differential Equations) for the PDFs (Probability Distribution Functions) of the
processes that we described through the SDEs (Stochastic Differential Equations).
ยจ Hence the 2 descriptions are equivalent
136
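The cross moment in the list above is the one worth checking numerically: at leading order in $dt$ (drift terms contribute at order $dt^2$ and are dropped here), $< dx(t)\cdot dy(t) > = \rho\cdot\sigma_x\cdot\sigma_y\cdot dt$ in both descriptions. A NumPy sketch with illustrative parameters:

```python
import numpy as np

# Instantaneous cross moment in the recast description:
# with dW^y = rho.dW~x + sqrt(1-rho^2).dW~y and <dW~x.dW~y> = 0,
# <dx.dy> should come out as rho.sigma_x.sigma_y.dt
rng = np.random.default_rng(5)
sigma_x, sigma_y, rho, dt, n = 0.01, 0.02, 0.35, 1e-2, 2_000_000

g1 = rng.standard_normal(n)
g2 = rng.standard_normal(n)
dx = sigma_x * np.sqrt(dt) * g1                                      # diffusion part of dx
dy = sigma_y * np.sqrt(dt) * (rho * g1 + np.sqrt(1 - rho**2) * g2)   # diffusion part of dy

cross_mc = np.mean(dx * dy)
cross_formula = rho * sigma_x * sigma_y * dt
print(cross_mc, cross_formula)
```

The same value obtains trivially in the correlated description, since $< dW^x\cdot dW^y > = \rho\cdot dt$ by construction.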
139. Luc_Faucheux_2021
IRMA โ Tuckman โ Gauss+ II
ยจ Bruce Tuckman worked at Salomon on the "2+ IRMA" model with other intellectual giants
like Craig Fithian, Francis Longstaff and others too numerous to name here.
ยจ So he knows what he is talking about.
ยจ His book is awesome to read.
ยจ He expresses the model on page 288 as:
ยจ $dm(t) = -\alpha_m\cdot(m(t) - l(t))\cdot dt + \sigma_m\cdot(\rho\cdot dW^1(t) + \sqrt{1-\rho^2}\cdot dW^2(t))$
ยจ $dl(t) = -\alpha_l\cdot(l(t) - \theta)\cdot dt + \sigma_l\cdot dW^1(t)$
ยจ $dr(t) = -\alpha_r\cdot(r(t) - m(t))\cdot dt$
ยจ With: $< dW^1(t)\cdot dW^2(t) > = 0$
ยจ It takes a little work to show that this can be cast into the 2+ IRMA formalism, but let's do it.
ยจ It's worth it, plus I got let go of my job at Natixis, so I need to get my bearings back… :)
139
144. Luc_Faucheux_2021
IRMA โ Tuckman โ Gauss+ VII
ยจ $dx(t) = -a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)$
ยจ $dy(t) = -a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t)$
ยจ $dz(t) = -a_z\cdot(z(t) - y(t))\cdot dt + \sigma_z\cdot dW^z(t)$
ยจ With: $< dW^x(t)\cdot dW^y(t) > = \rho\cdot dt$
ยจ $r(t) = z(t)$
ยจ This one has the "cascade" form because first you have $x$, then $(y - x)$, then $(z - y)$
ยจ We can use matrices like Tuckman does in the appendix or just "cascade" the change of
variables
ยจ $\overline{x} = x$
ยจ $\overline{y} = a_x\cdot y - a_y\cdot x$
144
145. Luc_Faucheux_2021
IRMA โ Tuckman โ Gauss+ VIII
ยจ $dx(t) = -a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)$
ยจ $dy(t) = -a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t)$
ยจ $\overline{x} = \alpha_{x,x}\cdot x + \alpha_{x,y}\cdot y$
ยจ $\overline{y} = \alpha_{y,x}\cdot x + \alpha_{y,y}\cdot y$
ยจ And then we will solve for the variables $\alpha_{i,j}$ to reduce the mean reversion to be diagonal
ยจ $d\overline{x} = \alpha_{x,x}\cdot dx + \alpha_{x,y}\cdot dy$
ยจ $d\overline{y} = \alpha_{y,x}\cdot dx + \alpha_{y,y}\cdot dy$
ยจ $d\overline{x} = \alpha_{x,x}\cdot(-a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)) + \alpha_{x,y}\cdot(-a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t))$
ยจ $d\overline{y} = \alpha_{y,x}\cdot(-a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)) + \alpha_{y,y}\cdot(-a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t))$
145
146. Luc_Faucheux_2021
IRMA โ Tuckman โ Gauss+ IX
ยจ $d\overline{x} = \alpha_{x,x}\cdot(-a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)) + \alpha_{x,y}\cdot(-a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t))$
ยจ $d\overline{y} = \alpha_{y,x}\cdot(-a_x\cdot x(t)\cdot dt + \sigma_x\cdot dW^x(t)) + \alpha_{y,y}\cdot(-a_y\cdot(y(t) - x(t) + \theta)\cdot dt + \sigma_y\cdot dW^y(t))$
ยจ And then we replace $x$ and $y$ by $\overline{x}$ and $\overline{y}$, which is a matrix inversion.
ยจ $\overline{x} = \alpha_{x,x}\cdot x + \alpha_{x,y}\cdot y$
ยจ $\overline{y} = \alpha_{y,x}\cdot x + \alpha_{y,y}\cdot y$
ยจ $\alpha_{y,x}\cdot\overline{x} = \alpha_{y,x}\cdot\alpha_{x,x}\cdot x + \alpha_{y,x}\cdot\alpha_{x,y}\cdot y$
ยจ $\alpha_{x,x}\cdot\overline{y} = \alpha_{x,x}\cdot\alpha_{y,x}\cdot x + \alpha_{x,x}\cdot\alpha_{y,y}\cdot y$
ยจ $\alpha_{y,x}\cdot\overline{x} - \alpha_{x,x}\cdot\overline{y} = \alpha_{y,x}\cdot\alpha_{x,x}\cdot x - \alpha_{x,x}\cdot\alpha_{y,x}\cdot x + (\alpha_{y,x}\cdot\alpha_{x,y} - \alpha_{x,x}\cdot\alpha_{y,y})\cdot y$
ยจ $\alpha_{y,x}\cdot\overline{x} - \alpha_{x,x}\cdot\overline{y} = (\alpha_{y,x}\cdot\alpha_{x,y} - \alpha_{x,x}\cdot\alpha_{y,y})\cdot y$
146
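Solving for the $\alpha_{i,j}$ by hand, as above, amounts to diagonalizing the 2x2 mean-reversion matrix of the $(x, y)$ cascade. A NumPy sketch of the matrix route Tuckman uses in his appendix (writing the drift as $-M\cdot(x,y)^T\cdot dt$ plus constants; parameter values are illustrative):

```python
import numpy as np

# Drift of the (x, y) cascade, constants aside:
# dx = -a_x.x.dt ; dy = -a_y.(y - x).dt  =>  M = [[a_x, 0], [-a_y, a_y]]
a_x, a_y = 0.8, 0.1
M = np.array([[a_x, 0.0],
              [-a_y, a_y]])

eigvals, V = np.linalg.eig(M)
alpha = np.linalg.inv(V)  # rows of inv(V) are one valid choice of the alpha_{i,j}

# With this choice the mean reversion is diagonal: alpha.M = diag(eigvals).alpha
print(np.allclose(alpha @ M, np.diag(eigvals) @ alpha))
print(np.sort(eigvals))  # the mean-reversion speeds are just a_x and a_y
```

The eigenvalues are simply $a_x$ and $a_y$ (the matrix is triangular), so the diagonalized variables mean-revert at the original speeds, which is the whole point of the change of variables.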