SystemPerformanceOptimizer Class
Namespace: Novacta.Analytics.Advanced
The SystemPerformanceOptimizer type exposes the following members.
Constructors

Name | Description
---|---
SystemPerformanceOptimizer | Initializes a new instance of the SystemPerformanceOptimizer class.
Properties

Name | Description
---|---
PerformanceEvaluationParallelOptions | Gets or sets options that configure the operation of the Parallel class while computing performance evaluations. (Inherited from CrossEntropyProgram.)
SampleGenerationParallelOptions | Gets or sets options that configure the operation of the Parallel class while generating sample points. (Inherited from CrossEntropyProgram.)
Methods

Name | Description
---|---
Equals | Determines whether the specified object is equal to the current object. (Inherited from Object.)
EvaluatePerformances | Evaluates the performance of the points in the sample drawn in the current iteration. (Inherited from CrossEntropyProgram.)
Finalize | Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object.)
GetHashCode | Serves as the default hash function. (Inherited from Object.)
GetType | Gets the Type of the current instance. (Inherited from Object.)
MemberwiseClone | Creates a shallow copy of the current Object. (Inherited from Object.)
Optimize | Runs a Cross-Entropy program designed to optimize a system's performance in the specified context.
Run | Runs this Cross-Entropy program in the specified context. (Inherited from CrossEntropyProgram.)
Sample | Draws a sample having the specified size from a distribution defined in the given context and characterized by the designated parameter. (Inherited from CrossEntropyProgram.)
ToString | Returns a string that represents the current object. (Inherited from Object.)
The Cross-Entropy method for combinatorial and multi-extremal optimization
The current implementation of the SystemPerformanceOptimizer class is based on the main Cross-Entropy program for optimization proposed by Rubinstein and Kroese (Algorithm 4.2.1) [1].
The Cross-Entropy method arose as a solution to the problem of estimating the probability of rare events regarding the performance of complex systems. However, the method was soon recognized as closely related to the optimization of continuous or discrete functions. In fact, optimizing a function is a task that can be accomplished by exploiting the Cross-Entropy machinery aimed at estimating the probability of rare events, since the event represented by a function assuming a value beyond a given extreme threshold can typically be interpreted as rare.
The Cross-Entropy algorithm for optimization is an adapted version of that for rare event simulation. Like the latter, it is an iterative procedure consisting of two main steps. In the sampling step, the state-space $\mathcal{X}$ of the system (the domain of the performance function $H$) is traversed looking for good candidate points at which global performance extrema can be attained. In the subsequent updating step, the distribution from which the states are sampled is modified in order to increase, in the next iterations, the probability of sampling near the true optimizer, say $x^{*}$.
Sampling step
At each iteration $t$, a sampling step is executed to generate $N$ states of the system, $X_1, \dots, X_N$, which are interpreted as candidate arguments of the performance function. It is assumed that such a sample is drawn from a specific density or probability function, a member of a parametric family

$$\mathcal{P} = \left\{ f\left(x; v\right) : v \in \mathcal{V} \right\},$$

where $x$ is a vector representing a possible state of the system, and $\mathcal{V}$ is the set of allowable values for parameter $v$. The parameter exploited at iteration $t$ is referred to as the reference parameter at $t$ and indicated as $w_{t-1}$.
The parametric family must be carefully chosen. In particular, the first reference parameter, $w_0$, must be such that no relevant subset of the state-space receives a too low probability: when starting the search for an optimizer, all possible states must have a real chance of being selected. If there is no a-priori knowledge about the position of the optimizer inside the state-space, it would be very useful if the parametric family could include a parameter such that the corresponding distribution were - at least approximately - uniform on the system's state-space. That parameter would be the perfect candidate for $w_0$.
Furthermore, it is expected that, given a reference parameter $w$, a corresponding reasonable value could be guessed for the optimal argument, say $\mathcal{OA}\left(w\right)$. In this way, by modifying appropriately the reference parameter, one can expect to better foresee the optimal argument. Finally, given the optimal reference parameter, say $w_T$ (the parameter corresponding to the last iteration executed by the algorithm before stopping), it will be possible to select via $\mathcal{OA}\left(w_T\right)$ the best argument at which the searched extremum can be considered as reached according to the Cross-Entropy method.
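As a concrete illustration (an assumption chosen here for exposition, not a family mandated by the class), a continuous scalar state-space can be equipped with the Gaussian family:

```latex
% Hypothetical Gaussian parametric family for a scalar state-space:
\mathcal{P} = \left\{\, f\left(x; v\right) =
  \frac{1}{\sigma\sqrt{2\pi}}
  \exp\left(-\frac{\left(x-\mu\right)^{2}}{2\sigma^{2}}\right)
  \;:\; v = \left(\mu, \sigma\right),\ \sigma > 0 \,\right\}
```

Here a large initial $\sigma$ makes the first sampling distribution nearly flat over any bounded region of interest, and a natural guess for the optimal argument given $w = \left(\mu, \sigma\right)$ is the mean $\mu$.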
Updating step
Given the sample $X_1, \dots, X_N$ drawn at iteration $t$ from distribution $f\left(\cdot; w_{t-1}\right)$, the main idea behind the Cross-Entropy updating step is that it must be designed to improve the concentration of the probability mass near $x^{*}$. To obtain such a result, the updating procedure is based on the sampled points whose performances are more relevant given the optimization goal of the algorithm: if one is maximizing the performance function, then the updating step relies on the states having the highest observed performances; otherwise, the update of the reference parameter is supported by those sampled states corresponding to the lowest performances.
The relevant points are known as elite sample points and can be defined as follows. The sample performances, $H\left(X_1\right), \dots, H\left(X_N\right)$, are computed and sorted in increasing order, say obtaining the following ordering:

$$h_{\left(1\right)} \leq \dots \leq h_{\left(N\right)}.$$

Let $X_{\left(i\right)}$ be the sampled state such that $h_{\left(i\right)} = H\left(X_{\left(i\right)}\right)$, $i = 1, \dots, N$, and consider a constant, referred to as the rarity of the program, say $\rho$, set to a value in the interval $\left(0, 1\right)$.
If the optimization goal is a maximization, then the points occupying a position greater than or equal to $\eta_u = \lceil \left(1 - \rho\right) N \rceil$ are the elite sampled states, i.e. $X_{\left(\eta_u\right)}, \dots, X_{\left(N\right)}$, with $\lceil \cdot \rceil$ being the ceiling function. In this case, the current iteration is said to reach a performance level equal to $\lambda_t = h_{\left(\eta_u\right)}$.
Otherwise, if the optimization goal is a minimization, then the elite sampled states can be defined as those points occupying a position less than or equal to $\eta_l = \lfloor \rho N \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function. Such points are thus $X_{\left(1\right)}, \dots, X_{\left(\eta_l\right)}$, and the current iteration is said to reach a performance level equal to $\lambda_t = h_{\left(\eta_l\right)}$.
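The elite-sample definitions above can be sketched as follows; this is illustrative Python, not the class's own implementation:

```python
import math

def elite_points(sample, performances, rho, maximizing):
    """Return (elite sample points, performance level) for one iteration.

    Illustrative sketch of the definitions above, not the class's own code.
    """
    n = len(sample)
    # Sort positions so that performances are in increasing order:
    # h_(1) <= ... <= h_(N).
    order = sorted(range(n), key=lambda i: performances[i])
    if maximizing:
        # Elite states occupy 1-based positions >= ceil((1 - rho) * N).
        first = math.ceil((1.0 - rho) * n)
        elite = [sample[order[i]] for i in range(first - 1, n)]
        level = performances[order[first - 1]]
    else:
        # Elite states occupy 1-based positions <= floor(rho * N).
        last = math.floor(rho * n)
        elite = [sample[order[i]] for i in range(last)]
        level = performances[order[last - 1]]
    return elite, level
```

For instance, with $N = 10$ and $\rho = 0.2$, a maximization keeps the three best performers (positions 8 to 10), while a minimization keeps the two worst-performing, i.e. lowest, ones (positions 1 and 2).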
The joint distribution from which the states will be sampled in the next iteration is updated exploiting the elite states only: in this way, given that the performances of the elite points are likely to be similar to that at $x^{*}$, it can be expected that such a choice will push the sampling distribution towards $x^{*}$.
In addition, to achieve the convergence of the algorithm, the mass also needs to accumulate near the reached level $\lambda_t$. Such a result can be obtained by estimating via the Cross-Entropy method the probability of a rare event: in case of maximization, the event of interest being

$$A_t = \left\{ x \in \mathcal{X} : H\left(x\right) \geq \lambda_t \right\}$$

or, if $H$ must be minimized,

$$A_t = \left\{ x \in \mathcal{X} : H\left(x\right) \leq \lambda_t \right\}.$$

The target probability can thus be represented as

$$p_t = \mathrm{E}_{w_{t-1}}\left[ 1_{A_t}\left(X\right) \right],$$

where $A_t$ is one of the two events above, and $1_{A_t}$ is the indicator function of $A_t$.

The Cross-Entropy method selects another density in $\mathcal{P}$ by identifying a parameter $v$ that guarantees the most accurate estimate of the target probability for a specified simulation effort, or sample size $N$, by taking into account the likelihood estimator based on the ratio

$$\frac{1_{A_t}\left(x\right) f\left(x; w_{t-1}\right)}{g^{*}\left(x\right)},$$

where

$$g^{*}\left(x\right) = \frac{1_{A_t}\left(x\right) f\left(x; w_{t-1}\right)}{p_t}$$

is the density of $X$ conditional on $A_t$. Such an estimator is unusable, since it depends on the target probability. However, it is ideally optimal because its variance is equal to zero.
For this reason, it is proposed to select the parameter such that the corresponding density is the closest to the optimal one, the difference between two densities, say $g$ and $h$, being measured as a Kullback–Leibler divergence:

$$\mathcal{D}\left(g, h\right) = \mathrm{E}_{g}\left[ \ln \frac{g\left(X\right)}{h\left(X\right)} \right] = \int g\left(x\right) \ln g\left(x\right) \, dx - \int g\left(x\right) \ln h\left(x\right) \, dx.$$

The selection is thus obtained by minimizing the divergence between the generic density in $\mathcal{P}$, say $f\left(\cdot; v\right)$, and the ideal density $g^{*}$:

$$\min_{v \in \mathcal{V}} \mathcal{D}\left(g^{*}, f\left(\cdot; v\right)\right).$$

Such minimization corresponds to the maximization

$$\max_{v \in \mathcal{V}} \mathrm{E}_{w_{t-1}}\left[ 1_{A_t}\left(X\right) \ln f\left(X; v\right) \right],$$

where $\mathrm{E}_{w_{t-1}}$ is the expectation operator under $f\left(\cdot; w_{t-1}\right)$. This implies that, given a sample $X_1, \dots, X_N$ from $f\left(\cdot; w_{t-1}\right)$, a solution to such optimization can be estimated through the solution to the program:

$$\max_{v \in \mathcal{V}} \frac{1}{N} \sum_{i=1}^{N} 1_{A_t}\left(X_i\right) \ln f\left(X_i; v\right),$$

where $X_1, \dots, X_N$ is the sample drawn at iteration $t$, and $1_{A_t}\left(X_i\right)$ equals $1$ if $X_i$ is an elite sample point, $0$ otherwise.
The interest in such an approach is due to the possibility of choosing the new parameter in a way that the event $A_t$ becomes less rare under density $f\left(\cdot; v\right)$ than under density $f\left(\cdot; w_{t-1}\right)$: the mass concentration near $A_t$ becomes higher. In this way, by decreasing the rareness of $A_t$, a smaller simulation effort would be required in order to obtain a good estimation, since a higher mass concentration is attained.
The iteration parameter $w_t$ is thus chosen to be the optimal one in $\mathcal{V}$ for estimating $p_t$:

$$w_t = \arg \max_{v \in \mathcal{V}} \frac{1}{N} \sum_{i=1}^{N} 1_{A_t}\left(X_i\right) \ln f\left(X_i; v\right),$$

and can be interpreted as the Maximum Likelihood estimate of $v$ based only on the elite sample points.
By default, the algorithm stops iterating when the performance level reached at an iteration does not change for a predefined number of consecutive iterations, or when the number of executed iterations reaches a specified maximum.
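The default stopping rule can be sketched as follows; `patience`, `max_iterations`, and the tolerance are hypothetical names and values for illustration, not the class's actual settings:

```python
def should_stop(levels, patience=3, max_iterations=1000):
    """Sketch of the default stopping rule.

    `levels` collects the performance level reached at each executed
    iteration; `patience` and `max_iterations` are hypothetical names for
    the predefined number of iterations and the iteration cap.
    """
    if len(levels) >= max_iterations:
        return True
    if len(levels) <= patience:
        return False
    # Stop when the last `patience` iterations left the level unchanged.
    recent = levels[-(patience + 1):]
    return all(abs(level - recent[0]) < 1e-12 for level in recent)
```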
Executing a Cross-Entropy optimizer
Class SystemPerformanceOptimizer follows the Template Method pattern [2] by defining in an operation the structure of a Cross-Entropy algorithm aimed at optimizing the performance of a system. Optimize is the template method, in which the invariant parts of a Cross-Entropy optimizer are implemented once, deferring the implementation of behaviors that can vary to a SystemPerformanceOptimizationContext instance, which method Optimize receives as a parameter.
SystemPerformanceOptimizationContext inherits or defines abstract or virtual methods which represent primitive operations of the template method, i.e. the varying steps that its subclasses override to define the concrete behavior of the algorithm. See the documentation of class SystemPerformanceOptimizationContext for examples of how to implement a context for system performance optimization.
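The division of labor between the template method and its context can be sketched in language-agnostic terms as follows; all names are illustrative stand-ins for the C# types, not the actual API:

```python
class OptimizationContext:
    """Stand-in for SystemPerformanceOptimizationContext: declares the
    varying steps (the primitive operations) as overridable methods."""

    def sample(self, parameter, size):
        raise NotImplementedError

    def performance(self, state):
        raise NotImplementedError

    def update_parameter(self, elite):
        raise NotImplementedError

    def stop_execution(self, levels):
        raise NotImplementedError


class Optimizer:
    """Stand-in for SystemPerformanceOptimizer: the template method
    implements the invariant Cross-Entropy loop exactly once."""

    def optimize(self, context, parameter, sample_size, rho):
        levels = []
        while True:
            # Sampling step: delegated to the context.
            points = context.sample(parameter, sample_size)
            # Updating step: elite selection (minimization shown here),
            # then the parameter update, again delegated to the context.
            points.sort(key=context.performance)
            elite = points[:max(1, int(rho * sample_size))]
            levels.append(context.performance(elite[-1]))
            parameter = context.update_parameter(elite)
            if context.stop_execution(levels):
                return parameter
```

The design choice mirrors the documentation above: the loop structure never changes, while subclasses of the context decide how states are sampled, scored, and how the reference parameter is updated and the program terminated.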