Simplified Runtime Analysis of
Estimation of Distribution Algorithms
Duc-Cuong Dang Per Kristian Lehre
University of Nottingham
United Kingdom
Madrid, Spain
July 11-15, 2015
Outline
Background
Previous Runtime Analyses of EDAs
Univariate Marginal Distribution Algorithm (UMDA)
Our Tool - Level-based Analysis
Non-elitist Processes
Level-based Theorem
Our Results
Warm up: LeadingOnes
Onemax and Feige’s Inequality
Conclusion
Runtime Analysis of EAs
Analysis of the time the EA requires to optimise a function f:
expected number of fitness evaluations
expressed asymptotically wrt instance size n
dependence on
characteristics of the problem
parameter settings of the EA
Previous Work

                     (1+1) EA            cGA¹
Onemax               Θ(n log n)          Θ(K√n) [Droste, 2006]                      EDA worse
Linear functions     Θ(n log n)          Ω(Kn) [Droste, 2006]                       EDA worse

                     (µ+1) EA            cGA²
Onemax + N(0, σ²)    superpoly(n) whp    O(Kσ²√n log Kn) [Friedrich et al., 2015]   EDA better

                     (1+1) EA            UMDA
LeadingOnes          Θ(n²)               O(λn), λ = ω(n²) [Chen et al., 2007]
BVLeadingOnes        Θ(n²)               ∞ w.o.p. [Chen et al., 2010] (w/o margins)
                                         O(λn), λ = ω(n²) [Chen et al., 2010]
SubString            2^Ω(n) w.o.p.       O(λn), λ = ω(n²) [Chen et al., 2009]       EDA better

¹ K = n^(1/2+ε)     ² K = ω(σ²√n log n)
Previous Work

                     (1+1) EA            cGA¹
Onemax               Θ(n log n)          Θ(K√n) [Droste, 2006]                      EDA worse
Linear functions     Θ(n log n)          Ω(Kn) [Droste, 2006]                       EDA worse

                     (µ+1) EA            cGA²
Onemax + N(0, σ²)    superpoly(n) whp    O(Kσ²√n log Kn) [Friedrich et al., 2015]   EDA better

                     (1+1) EA            UMDA
LeadingOnes          Θ(n²)               O(λn), λ = ω(n²) [Chen et al., 2007]
BVLeadingOnes        Θ(n²)               ∞ w.o.p. [Chen et al., 2010] (w/o margins)
                                         O(λn), λ = ω(n²) [Chen et al., 2010]
SubString            2^Ω(n) w.o.p.       O(λn), λ = ω(n²) [Chen et al., 2009]       EDA better
LeadingOnes          Θ(n²)               O(nλ log λ + n²) this paper³
Onemax               Θ(n log n)          O(nλ log λ) this paper

¹ K = n^(1/2+ε)     ² K = ω(σ²√n log n)     ³ λ = Ω(log n)
Univariate Marginal Distribution Algorithm

1: Initialise the vector p_0 := (1/2, . . . , 1/2).
2: for t = 0, 1, 2, . . . do
3:    Sample λ bitstrings y^1, . . . , y^λ according to the distribution
         p_t(x) = ∏_{i=1}^{n} p_t(i)^{x_i} (1 − p_t(i))^{1−x_i}
4:    Let y^{(1)}, . . . , y^{(λ)} be the bitstrings sorted by fitness func. f
5:    Compute the next vector p_{t+1} according to
         p_{t+1}(i) := X_i / µ,   where X_i := Σ_{j=1}^{µ} y_i^{(j)}
6: end for
Univariate Marginal Distribution Algorithm

1: Initialise the vector p_0 := (1/2, . . . , 1/2).
2: for t = 0, 1, 2, . . . do
3:    Sample λ bitstrings y^1, . . . , y^λ according to the distribution
         p_t(x) = ∏_{i=1}^{n} p_t(i)^{x_i} (1 − p_t(i))^{1−x_i}
4:    Let y^{(1)}, . . . , y^{(λ)} be the bitstrings sorted by fitness func. f
5:    Compute the next vector p_{t+1} according to
         p_{t+1}(i) :=  1/n        if X_i = 0
                        X_i / µ    if 1 ≤ X_i ≤ µ − 1
                        1 − 1/n    if X_i = µ,
         where X_i := Σ_{j=1}^{µ} y_i^{(j)}.
6: end for
UMDA on Onemax (n = 5000)
(Plot: values in [0, 1] on the vertical axis against iterations 0 to 50.)
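To make the pseudocode concrete, here is a minimal Python sketch of UMDA with margins applied to Onemax. It is an illustrative reimplementation of the pseudocode above, not the authors' code, and the parameter values in the demonstration run are arbitrary and much smaller than the n = 5000 of the plot.

```python
import random

def onemax(x):
    """Onemax fitness: number of 1-bits in the bitstring."""
    return sum(x)

def umda(f, n, lam, mu, iterations):
    """Minimal UMDA with margins [1/n, 1 - 1/n], following the pseudocode above.

    Returns the sequence of marginal-probability vectors p_t."""
    p = [0.5] * n                                   # p_0 := (1/2, ..., 1/2)
    history = []
    for _ in range(iterations):
        # Sample lambda bitstrings from the product distribution p_t
        pop = [[1 if random.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        # Sort by fitness (best first) and keep the mu fittest
        pop.sort(key=f, reverse=True)
        selected = pop[:mu]
        # Frequency update with margins, as in the three-case rule above
        for i in range(n):
            X_i = sum(y[i] for y in selected)
            if X_i == 0:
                p[i] = 1 / n
            elif X_i == mu:
                p[i] = 1 - 1 / n
            else:
                p[i] = X_i / mu
        history.append(list(p))
    return history

# Demonstration run with arbitrary, small parameters
if __name__ == "__main__":
    n = 100
    hist = umda(onemax, n=n, lam=200, mu=50, iterations=50)
    for t in (0, 9, 24, 49):
        print(f"iteration {t + 1}: mean marginal = {sum(hist[t]) / n:.3f}")
```

Plotting the rows of hist against the iteration number gives, qualitatively, the kind of picture suggested by the plot above.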
Non-elitist populations

Each new population over the search space X is sampled independently from a distribution defined by the current population:
   P_{t+1} = (y^1, y^2, . . . , y^λ),   where y^i ∼ D(P_t)
Non-elitist populations

Each new population over the search space X is sampled independently from a distribution defined by the current population:
   P_{t+1} = (y^1, y^2, . . . , y^λ),   where y^i ∼ D(P_t)

Example (UMDA)
   D(P)(x) := ∏_{i=1}^{n} p_i^{x_i} (1 − p_i)^{1−x_i}
Level-Based Theorem

The search space is structured into nested levels
   X = A_1 ⊃ A_2 ⊃ · · · ⊃ A_{m−1} ⊃ A_m.

(Diagram: the current population P_t spread over the levels, with at least γ_0λ individuals in A_j and γλ in A_{j+1}; new individuals are sampled from D(P_t).)
Level-Based Theorem

Theorem (Corus, Dang, Eremeev, Lehre (2014))
If for any level j < m and population P where
   |P ∩ A_j| ≥ γ_0λ > |P ∩ A_{j+1}| =: γλ,
an individual y ∼ D(P) is in A_{j+1} with
   Pr(y ∈ A_{j+1}) ≥ γ(1 + δ)   if γ > 0,
   Pr(y ∈ A_{j+1}) ≥ z_j        if γ = 0,
and the population size λ is at least
   λ = Ω(ln(m/(δ z_j)) / δ²),
then level A_m is reached in expected time
   O( (1/δ⁵) · ( m ln λ + Σ_{j=1}^{m} 1/(λ z_j) ) ).
LeadingOnes

LeadingOnes(x) := Σ_{i=1}^{n} ∏_{j=1}^{i} x_j
LeadingOnes

LeadingOnes(x) := Σ_{i=1}^{n} ∏_{j=1}^{i} x_j

Theorem
The expected optimisation time of UMDA with
   λ ≥ b ln(n) for some constant b > 0,
   λ > (1 + δ)eµ
on LeadingOnes is O(nλ ln(λ) + n²).
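To make the parameter conditions of the theorem concrete, here is a small, self-contained sketch (not from the paper) that implements the LeadingOnes fitness and picks population sizes satisfying the two conditions; the constants b = 5 and δ = 1 are arbitrary illustrative choices.

```python
import math

def leading_ones(x):
    """LeadingOnes fitness: length of the longest all-ones prefix."""
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def lo_parameters(n, b=5.0, delta=1.0):
    """Return (lambda, mu) with lambda >= b*ln(n) and lambda > (1+delta)*e*mu."""
    lam = math.ceil(b * math.log(n))
    mu = max(1, math.floor(lam / ((1 + delta) * math.e)))
    while mu > 1 and lam <= (1 + delta) * math.e * mu:
        mu -= 1   # enforce the strict inequality lambda > (1+delta)*e*mu
    if lam <= (1 + delta) * math.e * mu:
        lam = math.floor((1 + delta) * math.e * mu) + 1
    return lam, mu

print(lo_parameters(1000))   # -> (35, 6) for n = 1000 with these constants
```

The leading_ones function can be dropped into the umda sketch given earlier in place of onemax.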
Proof idea

Level definition: x ∈ A_j ⇐⇒ LeadingOnes(x) ≥ j.
If |P ∩ A_j| ≥ γ_0λ > |P ∩ A_{j+1}| =: γλ > 0,
then Pr(y ∈ A_{j+1}) ≥ γ(1 + δ).
Proof idea
11111111111111111111********
11111111111111111111********
11111111111111111111********
11111111111111111110********
11111111111111111110********
11111111111111111110********
11111111111111111110********
****************************
****************************
****************************
Level definition: x ∈ A_j ⇐⇒ LeadingOnes(x) ≥ j.
If |P ∩ A_j| ≥ γ_0λ > |P ∩ A_{j+1}| =: γλ > 0,
then Pr(y ∈ A_{j+1}) ≥ γ(1 + δ).
Proof idea
11111111111111111111********
11111111111111111111********
11111111111111111111********
11111111111111111110********
11111111111111111110********
11111111111111111110********
11111111111111111110********
****************************
****************************
****************************
Level definition: x ∈ A_j ⇐⇒ LeadingOnes(x) ≥ j.
If |P ∩ A_j| ≥ γ_0λ > |P ∩ A_{j+1}| =: γλ > 0,
then Pr(y ∈ A_{j+1}) = ∏_{i=1}^{j+1} p_i ≥ (1 − 1/n)^j · (γλ/µ) ≥ γλ/(eµ) ≥ γ(1 + δ).
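One way to see how these per-level bounds add up to the stated O(nλ ln(λ) + n²) result; this is a reconstruction, not a quote from the paper, and it assumes that the level-based bound counts generations (each costing λ evaluations) and that δ is a constant. For the γ = 0 case the margins give p_i ≥ 1 − 1/n for i ≤ j and p_{j+1} ≥ 1/n, hence z_j ≥ (1 − 1/n)^j · (1/n) ≥ 1/(en), and with m = n + 1 levels:

```latex
% Sketch under the stated assumptions (m = n + 1, constant delta, z_j >= 1/(en)):
\[
  \frac{1}{\delta^{5}}\Bigl(m\ln\lambda + \sum_{j=1}^{m}\frac{1}{\lambda z_j}\Bigr)
  = O\!\Bigl(n\ln\lambda + \frac{n^{2}}{\lambda}\Bigr)\ \text{generations}
  \quad\Longrightarrow\quad
  O\!\bigl(n\lambda\ln\lambda + n^{2}\bigr)\ \text{evaluations,}
\]
% and the population-size condition then reads
\[
  \lambda \;=\; \Omega\!\Bigl(\ln\frac{m}{\delta z_j}\Bigr) \;=\; \Omega(\ln n),
\]
% matching the requirement lambda >= b*ln(n) in the theorem.
```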
Onemax

Onemax(x) = Σ_{i=1}^{n} x_i
Onemax

Onemax(x) = Σ_{i=1}^{n} x_i

Theorem
The expected optimisation time of UMDA with
   λ ≥ b ln(n) for some constant b > 0,
   µ < min{λ/(13e), n}
on Onemax is O(nλ ln λ).
Proof idea (ignoring margins)

Recall definition of UMDA
Probability for i-th position (assuming within margins)
   p_i := X_i / µ,   where X_i := Σ_{j=1}^{µ} y_i^{(j)}
Proof idea (ignoring margins)

Recall definition of UMDA
Probability for i-th position (assuming within margins)
   p_i := X_i / µ,   where X_i := Σ_{j=1}^{µ} y_i^{(j)}

Definition of levels and a first observation
Choosing levels x ∈ A_j ⇐⇒ Onemax(x) ≥ j, need to show
   |P ∩ A_j| ≥ γ_0λ > |P ∩ A_{j+1}| =: γλ        (1)
   =⇒ Pr(Y ∈ A_{j+1}) ≥ γ(1 + δ)                  (2)
Note that assumption (1) with γ_0 := µ/λ implies
   Σ_{i=1}^{n} X_i ≥ µj + γλ,
since all µ = γ_0λ selected individuals have at least j one-bits and at least γλ of them have at least j + 1.
Proof idea (taking into account margins)

Write Y_{a,b} := Σ_{i=a}^{b} Y_i, and let ℓ be the number of positions at the upper margin 1 − 1/n, re-ordered to come after the first k positions. Then
   Pr(Y ∈ A_{j+1}) ≥ Pr(Y_{1,k} > γλ/µ + j − ℓ) · Pr(Y_{k+1,k+ℓ} = ℓ)
                   ≥ Pr(Y_{1,k} > E[Y_{1,k}] − γλ/(12µ)) · (1 − 1/n)^ℓ
Feige’s Inequality

Theorem
Given n independent random variables Y_1, . . . , Y_n ∈ [0, 1], then for all δ > 0
   Pr( Σ_{i=1}^{n} Y_i > Σ_{i=1}^{n} E[Y_i] − δ ) ≥ min{ 1/13, δ/(1 + δ) }
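As a sanity check on the inequality (not part of the original slides), a small Monte Carlo sketch estimating the left-hand side for independent Bernoulli variables and comparing it with the min{1/13, δ/(1 + δ)} guarantee; the chosen success probabilities, δ and sample sizes are arbitrary.

```python
import random

def feige_check(probs, delta, trials=100_000):
    """Estimate Pr(sum Y_i > sum E[Y_i] - delta) for independent
    Bernoulli(p_i) variables and compare with min(1/13, delta/(1+delta))."""
    mean_sum = sum(probs)                    # sum of expectations
    hits = 0
    for _ in range(trials):
        s = sum(1 for p in probs if random.random() < p)
        if s > mean_sum - delta:
            hits += 1
    empirical = hits / trials
    guarantee = min(1 / 13, delta / (1 + delta))
    return empirical, guarantee

# Arbitrary example: 50 Bernoulli variables with small success probabilities
probs = [0.02] * 50
print(feige_check(probs, delta=0.5))   # empirical estimate vs. lower bound
```

Up to Monte Carlo noise, the empirical estimate stays at or above the guaranteed lower bound.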
Proof idea

Pr(Y ∈ A_{j+1}) ≥ Pr(Y_{1,k} > γλ/µ + j − ℓ) · Pr(Y_{k+1,k+ℓ} = ℓ)
                ≥ Pr(Y_{1,k} > E[Y_{1,k}] − γλ/(12µ)) · (1 − 1/n)^ℓ
                ≥ min{ 1/13, (γλ/(12µ)) / (γλ/(12µ) + 1) } · (1/e)
                ≥ γλ/(13eµ)
Proof idea

Pr(Y ∈ A_{j+1}) ≥ Pr(Y_{1,k} > γλ/µ + j − ℓ) · Pr(Y_{k+1,k+ℓ} = ℓ)
                ≥ Pr(Y_{1,k} > E[Y_{1,k}] − γλ/(12µ)) · (1 − 1/n)^ℓ
                ≥ min{ 1/13, (γλ/(12µ)) / (γλ/(12µ) + 1) } · (1/e)
                ≥ γλ/(13eµ)
                ≥ γ(1 + δ)   if λ ≥ 13e(1 + δ)µ
Conclusion and Future Work

The recent level-based method seems well suited for EDAs:
   straightforward runtime analysis of the UMDA
   trivial analysis of LeadingOnes; smaller populations suffice, i.e., O(ln n) vs ω(n²)
   first upper bound on Onemax
How tight are the upper bounds?
   o(n ln n) on Onemax?
Other problems and algorithms:
   linear functions
   multivariate EDAs
Thank you
The research leading to these results has received funding from the
European Union Seventh Framework Programme (FP7/2007-2013)
under grant agreement no. 618091 (SAGE).
References
Chen, T., Lehre, P. K., Tang, K., and Yao, X. (2009).
When is an estimation of distribution algorithm better than an evolutionary
algorithm?
In Proceedings of the 10th IEEE Congress on Evolutionary Computation
(CEC 2009), pages 1470–1477. IEEE.
Chen, T., Tang, K., Chen, G., and Yao, X. (2007).
On the analysis of average time complexity of estimation of distribution
algorithms.
In Proceedings of 2007 IEEE Congress on Evolutionary Computation (CEC’07),
pages 453–460.
Chen, T., Tang, K., Chen, G., and Yao, X. (2010).
Analysis of computational time of simple estimation of distribution algorithms.
IEEE Trans. Evolutionary Computation, 14(1):1–22.
Droste, S. (2006).
A rigorous analysis of the compact genetic algorithm for linear functions.
Natural Computing, 5(3):257–283.
Friedrich, T., Kötzing, T., Krejca, M. S., and Sutton, A. M. (2015).
The benefit of sex in noisy evolutionary search.
CoRR, abs/1502.02793.
