
Released

Journal Article

Training Strategies for Deep Learning Gravitational-Wave Searches

MPS-Authors

Schäfer, Marlin
Binary Merger Observations and Numerical Relativity, AEI-Hannover, MPI for Gravitational Physics, Max Planck Society

Nitz, Alexander H.
Observational Relativity and Cosmology, AEI-Hannover, MPI for Gravitational Physics, Max Planck Society

Ohme, Frank
Binary Merger Observations and Numerical Relativity, AEI-Hannover, MPI for Gravitational Physics, Max Planck Society

Fulltext (public)

2106.03741.pdf (Preprint), 3 MB
PhysRevD.105.043002.pdf (Publisher version), 4 MB

Citation

Schäfer, M., Zelenka, O., Nitz, A. H., Ohme, F., & Brügmann, B. (2022). Training Strategies for Deep Learning Gravitational-Wave Searches. Physical Review D, 105: 043002. doi:10.1103/PhysRevD.105.043002.


Cite as: https://hdl.handle.net/21.11116/0000-0008-AB79-0
Abstract
Compact binary systems emit gravitational radiation that is potentially detectable by current Earth-bound detectors. Extracting these signals from the instruments' background noise is a complex problem, and the computational cost of most current searches depends on the complexity of the source model. Deep learning may be capable of finding signals where current algorithms hit computational limits. Here we restrict our analysis to signals from non-spinning binary black holes and systematically test different strategies by which training data are presented to the networks. To assess the impact of the training strategies, we re-analyze the first published networks and directly compare them to an equivalent matched-filter search. We find that the deep learning algorithms can generalize from low signal-to-noise ratio (SNR) signals to high-SNR ones, but not vice versa. As such, it is not beneficial to provide high-SNR signals during training, and the fastest convergence is achieved when low-SNR samples are provided early on. During testing we found that the networks are sometimes unable to recover any signals when a false-alarm probability $<10^{-3}$ is required. We resolve this restriction by applying a modification we call unbounded Softmax replacement (USR) after training. With this alteration we find that the machine-learning search retains $\geq 97.5\%$ of the sensitivity of the matched-filter search down to a false-alarm rate of 1 per month.
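
The abstract's central training finding, that networks trained on low-SNR injections generalize to louder signals and that low-SNR samples should therefore be shown early, can be encoded as a curriculum over the injected SNR range of the training data. The sketch below is a hypothetical illustration of that idea and not the authors' code: it assumes whitened, unit-variance Gaussian noise and unit-norm whitened templates (so rescaling a template to L2 norm rho injects it at optimal SNR rho), and the names `snr_range_for_epoch` and `make_batch` are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(42)

def snr_range_for_epoch(epoch, total_epochs=20):
    """Hypothetical curriculum: keep quiet signals from the start and
    widen the range toward louder ones as training progresses."""
    lo = 5.0                                           # quietest injections, fixed
    hi = 8.0 + 22.0 * min(epoch / total_epochs, 1.0)   # upper bound grows 8 -> 30
    return lo, hi

def make_batch(templates, n_samples, epoch, noise_sigma=1.0):
    """Draw a mixed batch of noise-only and signal+noise samples.

    `templates` is assumed to hold whitened waveforms, each normalised to
    unit L2 norm, so scaling to norm `rho` injects at optimal SNR `rho`
    in unit-variance white noise.
    """
    n_bins = templates.shape[1]
    lo, hi = snr_range_for_epoch(epoch)

    noise = rng.normal(0.0, noise_sigma, size=(n_samples, n_bins))
    labels = rng.integers(0, 2, size=n_samples)        # 1 = signal present
    idx = rng.integers(0, len(templates), size=n_samples)
    snrs = rng.uniform(lo, hi, size=n_samples)
    signals = templates[idx] * snrs[:, None] * labels[:, None]

    return noise + signals, labels

# Toy usage: 32 unit-norm random "templates" of 2048 samples each.
toy = rng.normal(size=(32, 2048))
toy /= np.linalg.norm(toy, axis=1, keepdims=True)
x, y = make_batch(toy, n_samples=64, epoch=0)
print(x.shape, y.mean())
```

The widening schedule above is only one way to encode the finding; per the abstract, a range that simply includes low-SNR samples from the beginning is the essential ingredient, since the networks do not generalize from high SNR down to low SNR.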
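The unbounded Softmax replacement (USR) mentioned at the end of the abstract is applied only after training. One common way to realise the idea for a two-class classifier, sketched below as an assumption rather than the paper's exact implementation, is to drop the final Softmax and rank candidates by the difference of the two raw class outputs (the log-odds of the signal class). This statistic is monotonic in the Softmax probability, so the ordering of candidates is unchanged, but it does not saturate near 1, which keeps thresholds corresponding to very low false-alarm probabilities resolvable. The `ToyDetector` network is a placeholder, not an architecture from the paper.

```python
import torch
import torch.nn as nn

class ToyDetector(nn.Module):
    """Placeholder two-class network emitting raw scores [signal, noise]."""
    def __init__(self, n_bins=2048):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_bins, 128), nn.ReLU(),
            nn.Linear(128, 2),          # raw logits, no Softmax here
        )

    def forward(self, x):
        return self.body(x)

def softmax_statistic(logits):
    """Bounded ranking statistic: P(signal), saturates near 1 for loud events."""
    return torch.softmax(logits, dim=-1)[..., 0]

def usr_statistic(logits):
    """Unbounded replacement: log-odds of the signal class.
    Monotonic in P(signal), so candidate ordering is preserved, but the
    value can grow without bound instead of clipping at 1."""
    return logits[..., 0] - logits[..., 1]

model = ToyDetector()
x = torch.randn(4, 2048)
with torch.no_grad():
    logits = model(x)
    print(softmax_statistic(logits))   # values in (0, 1)
    print(usr_statistic(logits))       # unbounded real values
```

Because the swap happens after training, the learned weights are untouched; only the statistic used to threshold candidate events changes, consistent with the abstract's description of USR as a post-training modification.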