Swarm Intelligence (PSO + ACO)
Swarm intelligence is concerned with the design of intelligent multiagent systems by
taking inspiration from the collective behavior of social insects such as ants,
termites, bees, and wasps, as well as from other animal societies such as flocks of
birds or fish schools. Colonies of social insects have mesmerized researchers for
many decades, and the mechanisms that govern their behavior remained unknown for a
long time. Even though the individual members of these colonies are unsophisticated,
in cooperation they are able to accomplish complex tasks. Coordinated
colony behavior emerges from relatively simple actions or interactions between the
colonies' individual members.
Optimization techniques based on swarm intelligence have become increasingly popular
during the last decade. They are characterized by a decentralized way of working
that mimics the behavior of swarms of social insects, flocks of birds, or fish
schools. The advantage of these approaches over traditional techniques is their
robustness and flexibility. These properties make swarm intelligence a successful
design paradigm for algorithms that deal with increasingly complex problems. After a
general introduction, this tutorial focuses on two of the most successful
optimization techniques inspired by swarm intelligence: ant colony
optimization, inspired by the foraging behavior of real ant colonies, and particle
swarm optimization, inspired by bird flocking.
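The particle swarm dynamics mentioned above can be sketched in a few lines. The following is a minimal global-best PSO for continuous minimization; the function names and the parameter values (inertia w, acceleration coefficients c1 and c2) are common illustrative defaults, not prescriptions from the tutorial:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box using a basic global-best PSO."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Swapping in a different objective function f is enough to try other problems; the velocity update combines inertia with pulls toward each particle's personal best and the swarm's global best.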
Transportation and Logistics
Transportation plays an important role in many industries as well as in
daily life. Growing global trade and rising energy costs lead
to an increasing demand for efficient transportation and logistics. We
present an overview of problems arising in transportation and review
applications of evolutionary computation and other metaheuristics in the
context of logistics. Like in other problem domains, the success of
evolutionary algorithms depends on the choice of representation and
search space, locality of operators and integration of local search.
Problems studied in academia typically differ significantly from
real-world problems. We give an overview of a commercial transportation
management system and its planning capabilities. The system allows
solving real-world vehicle routing problems and employs evolutionary
local search as its solution approach. We contrast the real-world vehicle
routing problem with its academic counterparts and give an outlook on
challenging problems of high practical relevance, which represent
promising research opportunities in the area of transportation,
logistics and metaheuristics.
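To make the local-search ingredient concrete: a classic building block of local search for routing problems is the 2-opt move, which reverses a tour segment whenever that shortens the route. The sketch below applies it to a single-vehicle tour, a deliberately simplified stand-in for a full vehicle routing solver; all names are illustrative:

```python
import math
import random

def tour_length(points, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Repeatedly reverse tour segments while this shortens the tour
    (first-improvement strategy); returns a locally optimal tour."""
    best = tour[:]
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j][::-1] + best[j:]
                if tour_length(points, cand) < tour_length(points, best) - 1e-12:
                    best, improved = cand, True
    return best
```

An evolutionary local search would alternate such a descent with perturbation and selection steps; real VRP solvers additionally handle capacities, time windows, and multiple vehicles.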
Computational Complexity of Evolutionary Computation in Combinatorial Optimization
Randomized search heuristics such as evolutionary
algorithms or ant colony optimization have been shown to be very
successful when dealing with real-world applications or problems from
combinatorial optimization. In recent years, a lot of progress has
been made in understanding the runtime behavior of these heuristics
from a theoretical point of view.
This is an important research area where a lot of interesting
questions are still open. In this tutorial, we would like to give an
overview of how to analyze randomized search heuristics with respect to
the number of solutions they need to produce until a good one has been
found. We start by considering the optimization of artificial
pseudo-boolean functions and point out some general methods. After
that, we consider some well-known combinatorial optimization problems
such as minimum spanning trees, Eulerian cycles, and the NP-hard
partition problem and show how the runtime behavior of different
randomized search heuristics can be analyzed for these problems.
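For pseudo-boolean functions such as OneMax (which simply counts the ones in a bit string), the algorithm most often analyzed is the (1+1) EA, whose expected number of evaluations on OneMax is known to be Θ(n log n). A minimal sketch, where the function names and the evaluation budget are our own illustrative choices:

```python
import random

def one_plus_one_ea(f, n, f_opt, max_evals=1_000_000):
    """(1+1) EA on bit strings: flip each bit independently with
    probability 1/n, accept the offspring if it is at least as good.
    Returns the best solution found and the number of evaluations used."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx, evals = f(x), 1
    while fx < f_opt and evals < max_evals:
        y = [b ^ 1 if random.random() < 1.0 / n else b for b in x]
        fy, evals = f(y), evals + 1
        if fy >= fx:        # accept if not worse (elitist selection)
            x, fx = y, fy
    return x, evals
```

Counting evaluations until the optimum is hit, as done here, is exactly the quantity the runtime analyses bound; for OneMax, `f` is simply `sum`.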
A Unified Approach to EC
The field of Evolutionary Computation has experienced tremendous growth over
the past 20 years, resulting in a wide variety of evolutionary algorithms and
applications. This poses an interesting dilemma for many practitioners:
with such a wide variety of algorithms and approaches, it is often
hard to see the relationships between them, assess their strengths and weaknesses,
and make good choices for new application areas.
This tutorial is intended to give an overview of a general EC framework that can
help compare and contrast approaches, encourages crossbreeding, and
facilitates intelligent design choices. The use of this framework is then illustrated
by showing how traditional EAs can be compared and contrasted with it, and how
new EAs can be effectively designed using it.
Finally, the framework is used to identify some important open issues that need
further study.
Bioinformatics
Bioinformatics is concerned with trying to make sense of the
huge and growing amount of biological data that is emerging
from modern biotechnology. We have immense amounts of sequence
information about genes and proteins, lots of partial information
about the roles and functions of proteins and many other
biomolecules, and lots of information that connects these data
with disease and other state information about biological systems.
But, to be able to extract useful knowledge and tools from these
data, optimisation, machine learning, and statistics expertise
is essential. Even more essential, however, are scientists who
understand both the biology and the advanced computer science. The
tutorial will assume that you know about
optimisation and machine learning, but you want to learn what the
opportunities are for applying your knowledge in the bioinformatics
arena. So, the tutorial will start with some relevant biology basics,
and then move on to discuss and explore some of the classification
and optimisation problems that need your expertise.
Evolution Strategies and Related Estimation of Distribution Algorithms (ES feat. EDA)
Evolution Strategies and some continuous domain Estimation
of Distribution Algorithms are stochastic optimization
procedures based on sampling multivariate Gaussian
(normal) distributions. They can be formulated in a
common, unified, but still very simple framework. Such a
framework is very useful for understanding subtle differences
between these methods.
This tutorial focuses on the most relevant question: how
should the parameters of the sample distribution be chosen,
and how should they be updated in the generation sequence?
The most common and successful approaches are reviewed,
including self-adaptation, success rules, path length
control, Covariance Matrix Adaptation (CMA), and Estimation
of Multivariate Normal Algorithm (EMNA).
Methods are motivated with respect to the difficulties one
has to face when solving continuous domain non-linear,
non-convex optimization problems. Specific problem
difficulties will be discussed in the beginning, for
example ill-conditioning and non-separability. Finally, the
performance of state-of-the-art Evolution Strategies is
related to that of other well-known evolutionary and
non-evolutionary optimization algorithms such as BFGS and DE.
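As an illustration of a success rule in its simplest form, here is a sketch of a (1+1)-ES with Rechenberg's 1/5th success rule for minimization; the constant a and the loop structure are illustrative choices, not the tutorial's own code:

```python
import random

def es_one_fifth(f, x0, sigma=1.0, iters=500, a=0.85):
    """(1+1)-ES with the 1/5th success rule (minimization).
    The step size sigma shrinks after a failure and grows after a success,
    with factors chosen so that sigma is stationary exactly when one in
    five mutations is accepted: a**(-4) * a**4 = 1."""
    x, fx = x0[:], f(x0)
    for _ in range(iters):
        # isotropic Gaussian mutation of the parent
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy
            sigma /= a ** 4     # success: enlarge the step size
        else:
            sigma *= a          # failure: shrink the step size
    return x, fx
```

On a well-conditioned problem such as the sphere function, this rule keeps the success probability near 1/5 and yields linear convergence; the more elaborate methods in the tutorial (path length control, CMA) replace this scalar rule with adaptation of a full covariance matrix.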
Multi-Objective Evolutionary Algorithms (EMOA)
Multiple, often conflicting objectives arise naturally
in most real-world optimization scenarios. As evolutionary
algorithms possess several characteristics that are desirable for
this type of problem, this class of search strategies has
been used for multiobjective optimization for more than a decade.
Meanwhile, evolutionary multiobjective optimization has become
established as a separate subdiscipline combining the fields of
evolutionary computation and classical multiple criteria decision
making.
This tutorial gives an overview of evolutionary multiobjective
optimization with the focus on methods and theory. On the one hand,
basic principles of multiobjective optimization are presented, and
various algorithmic aspects such as fitness assignment and environmental
selection are discussed in the light of state-of-the-art techniques. On
the other hand, the tutorial covers several theoretical issues such as
performance assessment and running-time analysis.
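Central to fitness assignment and environmental selection is the Pareto dominance relation. A minimal sketch (for minimization; the function names are our own):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep the points dominated by no other point (naive O(n^2) filter)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Most evolutionary multiobjective algorithms rank the population with some refinement of this filter (e.g. non-dominated sorting) before applying a diversity-preserving selection.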
Learning Game Strategies with Function Approximation
This tutorial provides a practical introduction to game strategy
learning with function approximation architectures. The tutorial will
cover the two main approaches to learning game strategy: evolution
(including co-evolution), and temporal difference learning.
We also look at how the choice of input features and function
approximation architecture has a critical impact on what is learned, as
well as how it is interfaced to the game (e.g. as a value estimator or
as an action selector). In addition to standard MLPs, attention is also
given to N-Tuple systems and interpolated table-based approximators, as
these have recently shown great potential to learn quickly.
Each method will be demonstrated with reference to some simple fragments
of software, illustrating how the learning algorithm is connected with
the game and with the function approximation architecture. Example games
will include Othello, Simulated Car Racing, and Ms. Pac-Man.
Recent results on the information rate limits attainable for these
learning algorithms (in terms of bits of information gained per game
played) will also be discussed, together with a simple test game that
enables direct testing of these information rates.
The Future of Experimental Research
It is an open secret that the performance of algorithms depends on their
parameterizations --- and on the parameterizations of the problem
instances. However, these dependencies can be seen as a means for
understanding an algorithm's behavior. Based on modern statistical
techniques, we demonstrate how to tune and analyze search heuristics.
We present a comprehensive, effective and very efficient methodology for
the design and experimental analysis of search heuristics such as
evolutionary algorithms, differential evolution, pattern search or even
classical deterministic methods such as the Nelder-Mead simplex algorithm.
Our approach extends the sequential parameter optimization (SPO) method
that has been successfully applied as a tuning procedure to numerous
heuristics for practical and theoretical optimization problems.
Optimization practitioners receive valuable hints for choosing an
adequate heuristic for their optimization problems --- theoreticians
receive guidelines for systematically testing theoretical results on real
problem instances.
Based on several examples from theory and practice, we demonstrate how
SPO improves the performance of many search heuristics significantly.
However, this performance gain does not come for free; therefore, the
costs of the tuning process are discussed as well.