Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/138713
Type: Conference paper
Title: Evolving Sampling Strategies for One-Shot Optimization Tasks
Author: Bossek, J.
Doerr, C.
Kerschke, P.
Neumann, A.
Neumann, F.
Citation: Lecture Notes in Artificial Intelligence, 2020 / Bäck, T., Preuss, M., Deutz, A.H., Wang, H., Doerr, C., Emmerich, M.T.M., Trautmann, H. (ed./s), vol.12269, pp.111-124
Publisher: Springer
Publisher Place: Cham, Switzerland
Issue Date: 2020
Series/Report no.: Lecture Notes in Computer Science; 12269
ISBN: 9783030581114
ISSN: 0302-9743
1611-3349
Conference Name: 16th International Conference on Parallel Problem Solving from Nature (PPSN) (5 Sep 2020 - 9 Sep 2020 : Leiden, The Netherlands)
Editor: Bäck, T.
Preuss, M.
Deutz, A.H.
Wang, H.
Doerr, C.
Emmerich, M.T.M.
Trautmann, H.
Statement of Responsibility: Jakob Bossek, Carola Doerr, Pascal Kerschke, Aneta Neumann, and Frank Neumann
Abstract: One-shot optimization tasks require determining the set of solution candidates prior to their evaluation, i.e., without the possibility of adaptive sampling. We consider two variants: classic one-shot optimization (where our aim is to find at least one solution of high quality) and one-shot regression (where the goal is to fit a model that resembles the true problem as well as possible). For both tasks it seems intuitive that well-distributed samples should perform better than uniform or grid-based samples, since they provide better coverage of the decision space. In practice, quasi-random designs such as Latin Hypercube Samples and low-discrepancy point sets are indeed very commonly used for one-shot optimization tasks. In this work we study how well low star discrepancy correlates with performance in one-shot optimization. Our results confirm an advantage of low-discrepancy designs, but also indicate that the correlation between discrepancy values and overall performance is rather weak. We then demonstrate that commonly used designs may be far from optimal. More precisely, we evolve 24 very specific designs, each of which achieves good performance on one of our benchmark problems. Interestingly, we find that these specifically designed samples yield surprisingly good performance across the whole benchmark set. Our results therefore give a strong indication that significant performance gains over state-of-the-art one-shot sampling techniques are possible, and that evolutionary algorithms can be an efficient means to evolve them.
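
For readers who want to experiment with the kind of designs discussed in the abstract, the following Python sketch (not taken from the paper; it assumes scipy's qmc module is available) generates a Latin Hypercube design and compares its discrepancy against a plain uniform random design. Note that scipy evaluates the L2-star discrepancy, a computationally tractable relative of the star discrepancy studied in the paper, whose exact computation is NP-hard; the sample size and dimension below are arbitrary illustrative choices.

    # Illustrative sketch, not the paper's code: build a Latin Hypercube
    # design and a uniform random design, then compare their discrepancies.
    import numpy as np
    from scipy.stats import qmc

    n, d = 100, 2  # design size and dimension (arbitrary choices)

    # Latin Hypercube design: stratifies each coordinate axis so that
    # every axis-aligned slice of width 1/n contains exactly one point.
    lhs = qmc.LatinHypercube(d=d, seed=1).random(n)

    # Plain uniform random design of the same size, for comparison.
    uniform = np.random.default_rng(1).random((n, d))

    # scipy's discrepancy() computes the L2-star discrepancy, a proxy
    # for the (NP-hard to evaluate) star discrepancy; lower is better.
    for name, pts in (("LHS", lhs), ("uniform", uniform)):
        print(name, qmc.discrepancy(pts, method="L2-star"))

The Latin Hypercube design typically attains the lower value, consistent with the intuition in the abstract that stratified designs cover the decision space more evenly than uniform random samples.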
Keywords: One-shot optimization; Regression; Fully parallel search; Surrogate-assisted optimization; Continuous optimization
Rights: © Springer Nature Switzerland AG 2020
DOI: 10.1007/978-3-030-58112-1_8
Grant ID: http://purl.org/au-research/grants/arc/DP190103894
Published version: https://doi.org/10.1007/978-3-030-58112-1_8
Appears in Collections: Computer Science publications

Files in This Item:
There are no files associated with this item.

