BBComp

EMO 2017

Standing: 1/10

Model-based hypervolume maximization: a rather greedy approach that avoids the expected hypervolume improvement criterion.
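
The greedy idea can be illustrated with a small sketch, here restricted to two objectives (minimization) for brevity; predict stands for an arbitrary surrogate model and all names are illustrative, not the actual competition code. Instead of integrating over the model's uncertainty (expected hypervolume improvement), the candidate whose predicted objective vector adds the most hypervolume to the current front is selected:

  def hypervolume_2d(front, ref):
      """Hypervolume dominated by a 2-objective front (minimization) w.r.t. ref.
      Dominated points and points outside the reference box contribute nothing."""
      pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
      hv, best_f2 = 0.0, ref[1]
      for f1, f2 in pts:
          if f2 < best_f2:
              hv += (ref[0] - f1) * (best_f2 - f2)
              best_f2 = f2
      return hv

  def greedy_hv_choice(candidates, predict, front, ref):
      """Greedily pick the candidate whose *predicted* objective vector adds the
      most hypervolume to the current front (no expected-improvement integral)."""
      base = hypervolume_2d(front, ref)
      return max(candidates,
                 key=lambda x: hypervolume_2d(front + [tuple(predict(x))], ref) - base)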

GECCO 2016

All algorithms are implemented in Python. The source code is provided here. Additionally, the packages numpy, scipy, cma, optproblems, dfoalgos, and evoalgos must be installed.

1OBJ

Standing: 1/14

This algorithm is a further development of last year's entry. Based on data from a preliminary experiment on MPM2 test problems, a decision tree was built to decide which two-stage variant to apply at which dimension. The two-stage options are restarted local search and a clustering method; the local search algorithms are Nelder-Mead and CMA-ES. Before the two-stage algorithm starts, a single L-BFGS-B run is tried.
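
A rough skeleton of this control flow is sketched below. Everything except SciPy's L-BFGS-B is a hypothetical stand-in, the start point of the initial run is simply assumed to be the box centroid (as in the CEC 2015 entry), and the decision tree's actual thresholds are not reproduced:

  import numpy as np
  from scipy.optimize import minimize

  def optimize(objective, lower, upper, budget, nelder_mead, cma_es,
               restarted_local_search, clustering_method, variant_decision_tree):
      """Skeleton: one L-BFGS-B run first, then a dimension-dependent two-stage variant."""
      dim = len(lower)
      x0 = 0.5 * (np.asarray(lower, float) + np.asarray(upper, float))  # box centroid
      result = minimize(objective, x0, method="L-BFGS-B", bounds=list(zip(lower, upper)))
      remaining = budget - result.nfev
      # Nelder-Mead (dfoalgos) up to ten dimensions (inclusive), CMA-ES above that.
      local_search = nelder_mead if dim <= 10 else cma_es
      # The decision tree fitted on MPM2 data picks the two-stage variant per dimension.
      if variant_decision_tree(dim) == "restart":  # label is illustrative only
          return restarted_local_search(objective, lower, upper, remaining, local_search)
      return clustering_method(objective, lower, upper, remaining, local_search)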

The main changes compared to last year are:

  • Nelder-Mead (the implementation from dfoalgos) is used for up to ten dimensions (inclusive).
  • CMA-ES is updated to version 1.1.06.
  • 10% of the budget is reserved for a final local optimization of the best solution found so far (see the sketch after this list).
  • The stopping criteria in the first 90% of the budget are much more aggressive than before.
  • For the clustering method, the global sample size was reduced to 25n. To compensate for this reduction, the point sets are now aggregated over the two-stage iterations.
  • Nearest-better clustering was modified according to the results in: Simon Wessing, Günter Rudolph, Mike Preuss. Assessing Basin Identification Methods for Locating Multiple Optima. In Panos M. Pardalos, Anatoly Zhigljavsky, and Julius Žilinskas (eds.): Advances in Stochastic and Deterministic Global Optimization, Springer, 2016
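
For concreteness, the budget split mentioned above could be organized roughly as follows; two_stage_search and local_search are hypothetical callables, not the actual code:

  def run_with_final_polish(objective, budget, two_stage_search, local_search):
      """Spend ~90% of the evaluations on the (aggressively stopped) two-stage search
      and keep the remaining ~10% for polishing the best solution found so far."""
      main_budget = int(0.9 * budget)
      best = two_stage_search(objective, main_budget)
      return local_search(objective, best, budget - main_budget)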

1OBJ-expensive

Standing: 3/15

The same code as for BBComp2016-1OBJ, but with a slightly different configuration. The local search algorithms are the same, but only restarted local search is used. The second difference is that all points ever sampled are put into an archive and considered in the global stage.

2OBJ/3OBJ

Standing 2OBJ: 1/7, Standing 3OBJ: 2/7

This approach recycles the existing code for the single-objective tracks. The first 75% of the budget is spent on single-objective restarted local search: an objective of the problem is randomly chosen and a local search is run on it; then another objective is randomly chosen and the previously found optimum is used as starting point for this objective. This strategy is iterated until the allocated budget is exhausted. The motivation is to find some hopefully Pareto-optimal local optima, under the assumption that the problem is multimodal. The last 25% of the budget is spent on a (100 + 5)-SMS-EMOA with self-adaptive Gaussian mutation and no recombination, started from the non-dominated points found so far. This algorithm explicitly optimizes hypervolume and is thus hopefully able to provide some refinement.
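
A schematic of the first phase is given below; objectives (a list of single-objective callables), initial_point, and local_search(f, x0, budget), which returns an optimum and the number of evaluations used, are hypothetical stand-ins, and the SMS-EMOA refinement from evoalgos is not shown:

  import random

  def alternating_local_search(objectives, initial_point, local_search, budget):
      """Roughly the first 75% of the budget: repeatedly run a single-objective local
      search on a randomly chosen objective, restarting from the optimum found for
      the previously chosen objective."""
      evaluations = 0
      archive = []                       # candidates for Pareto-optimal points
      x = initial_point
      while evaluations < budget:
          f = random.choice(objectives)  # pick one objective at random
          x, used = local_search(f, x, budget - evaluations)
          evaluations += used
          archive.append(x)
      return archive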

CEC 2015

Standing: 2/25

The core concept of the submitted entry is to use a sequence of local searches. Every optimization begins with an L-BFGS-B (SciPy) run starting at the centroid of the search space. Starting points and obtained optima are generally recorded in an archive. Starting points for the remaining local searches are not chosen randomly, but with an adaptive sampling algorithm maximizing the minimal distance of the starting point to points in the archive. This sampling algorithm is called maximin reconstruction (MmR, Python implementation). As a distance metric we consider the Manhattan distance on a torus.
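
The distance and the maximin criterion can be illustrated compactly. The following naive candidate screening (assuming the unit hypercube as search space) is only a stand-in for the real MmR algorithm from diversipy:

  import numpy as np

  def torus_manhattan(a, b):
      """Manhattan distance on the unit hypercube with wrap-around (torus) topology."""
      d = np.abs(np.asarray(a, float) - np.asarray(b, float))
      return float(np.sum(np.minimum(d, 1.0 - d)))

  def maximin_start_point(archive, dim, num_candidates=1000, rng=None):
      """Among random candidates, return the one whose nearest archive point
      (in torus Manhattan distance) is farthest away."""
      rng = np.random.default_rng() if rng is None else rng
      candidates = rng.random((num_candidates, dim))
      if len(archive) == 0:
          return candidates[0]
      return max(candidates, key=lambda c: min(torus_manhattan(c, a) for a in archive))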

The search space dimension determines exactly how the budget is spent:

  • If the dimension is five or smaller, a restarted Nelder-Mead algorithm (SciPy implementation, but with modified simplex initialization) is used for optimization.
  • Between dimensions eight and twenty, inclusive, a clustering method is used with CMA-ES (Python version 1.1.02) as local search component. The algorithm alternates between two stages until the budget is exhausted. In the global stage, a sample of 50n points is drawn with MmR. Nearest-better clustering is then employed to select a variable number of starting points for local search (a bare-bones sketch of this step follows the list). Once the local stage has finished, the algorithm returns to global sampling, taking the now extended archive into account.
  • For higher dimensions, CMA-ES is used in the same restart fashion as Nelder-Mead.
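
A bare-bones version of the nearest-better clustering step might look as follows; the actual implementation differs in details, the cut factor of 2.0 is only a common rule of thumb, and Euclidean distance is assumed here for simplicity:

  import numpy as np

  def nearest_better_starting_points(points, values, cut_factor=2.0):
      """Connect every point to its nearest better point; points whose connecting
      edge is unusually long (or which have no better point at all) are kept as
      starting points for local search."""
      points = np.asarray(points, float)
      values = np.asarray(values, float)
      edge_lengths = np.full(len(points), np.inf)
      for i in range(len(points)):
          better = np.flatnonzero(values < values[i])
          if better.size:
              edge_lengths[i] = np.linalg.norm(points[better] - points[i], axis=1).min()
      finite = edge_lengths[np.isfinite(edge_lengths)]
      mean_length = finite.mean() if finite.size else 0.0
      # Points with a cut (long) or missing nearest-better edge are treated as
      # representatives of separate basins.
      return points[edge_lengths > cut_factor * mean_length]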

In all cases, the initial step size is chosen as min(0.1, max(0.005, 0.5 * minDistance)), where minDistance is the distance from the starting point to its nearest neighbor in the archive. The parameter tolfun of CMA-ES is set to 1e-6. All other algorithm parameters are kept at their respective default values.
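
The step-size rule translates directly into a small helper. The sketch below assumes the archive is a non-empty list of previously evaluated points and distance is the metric described above:

  def initial_step_size(start_point, archive, distance):
      """sigma0 = min(0.1, max(0.005, 0.5 * distance to the nearest archived point))."""
      min_distance = min(distance(start_point, a) for a in archive)
      return min(0.1, max(0.005, 0.5 * min_distance))

  # Used, for example, when starting CMA-ES from a new point:
  #   cma.fmin(objective, start_point, initial_step_size(start_point, archive, distance),
  #            options={"tolfun": 1e-6})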

Details about the sampling algorithm MmR and more information about two-stage algorithms in general can be found in my dissertation. A Python implementation of MmR is available in the package diversipy.

 