1 Introduction

Spaceborne gravimetry is a well-established tool of modern Earth observation. Measurements of the gravity field reveal Earth’s state of mass balance and its dynamics; they provide the geoid as the reference for sea level, global ocean circulation and height systems; and variations in gravity and in the geoid measure mass exchange processes in the Earth system (Rummel 2010).

GOCE (Floberghagen et al. 2011), in orbit from 2009 to 2013, delivered a gravity map of Earth with an accuracy of \(2{-}3\,\hbox {cm}\) for the geoid and \({0.7}\,\hbox {mGal}\) for the gravity anomalies, at a spatial resolution of \({100}\,\hbox {km}\). GOCE’s instruments measured full-tensor gravity gradients (four high-accuracy components, two of lower quality) and GPS positions while orbiting in drag-free mode at mean altitudes of \({255}\,\hbox {km}\) (nominal mission) and \({225}\,\hbox {km}\) (extended mission).

GRACE (Tapley et al. 2019), in orbit from 2002 to 2017, provided monthly estimates of Earth’s global gravity field at scales of a few hundred kilometres and larger. The time variations in the gravity field were used to determine changes in Earth’s mass distribution, with applications ranging from the measurement of continental water content (seasonal changes in large river basins, groundwater extraction), to ice and snow accumulation and depletion in the polar regions and large glaciers, to the monitoring of global mean eustatic sea-level variations. GRACE was the first mission to apply satellite-to-satellite tracking (SST) in low Earth orbit for measuring Earth’s gravity using a microwave ranging instrument. This technique is implemented by a pair of satellites orbiting at low altitude, chasing one another at an approximately constant distance (Fig. 1). As gravity perturbs the orbits of the two satellites, their distance apart, d, changes with time. By measuring the overall distance variations, \(\varDelta d\), collected over a certain time period, the map of the average gravity field in that period can be retrieved, provided that the effect, \(\varDelta d_{\mathrm {D}}\), of the non-gravitational forces, \(F_{\mathrm {D}}\), is properly measured by accelerometer(s) and subtracted in the data processing, thereby isolating the contribution \(\varDelta d_{\mathrm {G}}\) of the geopotential. GRACE consisted of two identical satellites in near-circular, polar (\({89}^{\circ }\) inclination) orbits, initially at \({500}\,\hbox {km}\) altitude, at a mutual along-track distance varying between 170 and \({270}\,\hbox {km}\). The instantaneous biased distance measured by a dual-band microwave ranging instrument (\({24}\,\hbox {GHz}\), \({32}\,\hbox {GHz}\)) was the main observable, supplemented by GPS positions and non-gravitational acceleration data measured by high-precision accelerometers.
The satellite altitude decayed naturally under air drag down to about \({320}\,\hbox {km}\) at the end of the mission, and the ground track did not have a fixed repeat pattern. GRACE Follow-On (GRACE-FO; Kornfeld et al. 2019), launched on 22 May 2018, is meant to continue the GRACE time series for at least 5 years, with largely unchanged on-board systems, plus a technology demonstration of a more precise, laser-based ranging interferometer.

Fig. 1

Principle of the SST technique

Acceleration measurement errors (temperature drifts, angular rates), the relatively high and variable altitude and the one-dimensional north–south sampling are known limitations of the GRACE gravity models. Improved spacecraft designs (thermal control, attitude measurement and control) can reduce the systematic errors somewhat. Beyond that, however, aliasing due to poor sampling of high-frequency ocean and atmospheric mass variations dominates, and even a substantially improved instrument such as the laser ranging interferometer on board GRACE-FO cannot deploy its full potential (Flechtner et al. 2016).

Soon after the selection of GOCE as its first Earth Explorer (1999), ESA started preparatory studies towards a Next-Generation Gravity Mission (NGGM), a future project under study as a successor of ESA’s first gravity mission, GOCE, and of the NASA-DLR joint missions, GRACE and GRACE-FO. Since the mid-2000s, these studies have focused on a mission concept devoted to sustained observation and quantification of mass transport processes in the Earth system over a decade-long time span. The NGGM mission and system design will build on the experience of GOCE (in particular for the design of the attitude and orbit controls), GRACE (for the SST measurement technique between two spacecraft flying in low Earth orbit) and GRACE-FO (for the use of laser interferometry as inter-satellite metrology). The temporal resolution in sampling Earth’s gravity is enhanced by having two pairs of satellites in the so-called Bender constellation (Bender et al. 2008): one pair in a near-polar orbit and one pair in a medium-inclination orbit (Fig. 2). With two pairs of satellites, a homogeneous coverage of the Earth is achieved in a short time, so that gravity field solutions better than those provided by GRACE on a monthly basis become possible even over periods as short as three days.

Fig. 2

Next-Generation Gravity Mission (NGGM) constellation geometry

In recent years, the optimization of the satellite constellation for NGGM has been the subject of a series of dedicated ESA-funded studies. The SC4MVG study (Iran Pour et al. 2015) analysed criteria for orbit selection, provided a first shortlist of candidate orbits and estimated the associated performance in the retrieval of the gravity field. In particular, the Bender constellation emerged as the formation of choice. The ADDCON study (Pail et al. 2018) continued the numerical simulation and assessment of selected scenarios with varying orbit altitudes, repeat periods and resulting space–time sampling behaviour. Finally, Gravitational Seismology (Sabadini et al. 2019) has been studied as a new application of the NGGM. Starting from the SC4MVG and ADDCON results, this latter study aimed at quantifying the overall performance of the NGGM in terms of observational errors affecting the recovered gravity field and at assessing the detectability of the earthquake gravity signatures, as well as of the solid Earth processes responsible for them, such as active tectonic processes (subduction, collision and extension; Marotta et al. 2020) and the strain accumulation during the inter-seismic phase (Cambiotti et al. 2018).

In the present work, after having evaluated the expected performance provided by the NGGM and after having designed a simulator for generating NGGM synthetic data during a mission lifetime of almost 12 years, we discuss the detectability of the co- and post-seismic gravity signatures associated with elastostatic deformation and stress relaxation by viscous flow caused by earthquakes. Real gravity data from GRACE and GOCE missions have already been exploited for studying mass rearrangement caused by megathrust earthquakes, such as the 2004 Sumatra and 2011 Tohoku earthquakes (De Linage et al. 2009; Cambiotti and Sabadini 2013; Fuchs et al. 2016; Cambiotti 2020) and, in some cases, earthquakes of magnitude as low as \(M_{\mathrm {W}}=8.3\) have been detected as well (Chao and Liau 2019). Hereinafter, in order to investigate the potential of the NGGM for seismological purposes, we assess the lowest earthquake magnitude threshold detectable by this new satellite mission by considering earthquakes of magnitude as low as \(M_{\mathrm {W}}=7\), comparable with the 1980 Irpinia intraplate earthquake.

In this perspective, in Sect. 2, we will discuss in more detail the NGGM scenario and the performance of the on-board accelerometers and the laser interferometry system. In Sect. 3, we will describe the NGGM simulation environment and the main contributions to the expected observational errors. Finally, in Sects. 4 and 5, we will analyse the earthquake gravity signatures and their detectability, both for the ideal case in which they must be discriminated from the observational errors only and for the more realistic case in which they must be discriminated also from other time-variable geophysical processes.

2 Next-Generation Gravity Mission

The reference orbits for the considered NGGM follow the 8th scenario of the ADDCON study (Pail et al. 2018). The parameters used in our simulations are detailed in Table 1. Besides the compromise between the sensitivity to Earth’s gravity (the lower the altitude, the better) and the mission lifetime (the higher the altitude, the longer the lifetime, for a given amount of propellant carried on board), the satellite altitudes were selected to achieve a near-repeat orbit, with both the polar and the inclined ground track patterns shifting by \(1.3^{\circ }\) every 7 days. Although the right ascensions of the ascending nodes of the two orbits drift at different rates, the ground track pattern over a 7-day interval (Fig. 3) repeats identically with a 7-day period. By using ion thrusters for orbit maintenance and drag-free control over the complete lifetime, the required propellant is estimated at about 13% of the spacecraft mass. The chosen inclinations also allow a dense sampling of the lower latitudes while maintaining a good coverage of the high latitudes, thus fulfilling the scientific interests concerning ice and water mass transport phenomena.

Table 1 Orbit parameters used in the NGGM simulations
Fig. 3

Ground track coverage over a 7-day period of two satellite pairs with \(89^\circ\) and \(70^\circ\) orbital inclinations (blue and green lines, respectively)

The relative distance between the satellites of each pair is driven by the required spatial resolution and precision in the measurement of Earth’s gravity field: \(d={100}\,\hbox {km}\) is the current baseline. The accuracy of Earth’s gravity field model obtained by SST depends, among other factors, on the precise measurement of the inter-satellite distance variation, \(\varDelta d\), and of the non-gravitational accelerations acting on the spacecraft, \(\varDelta {{\ddot{d}}}_{\mathrm {D}}\). The NGGM will use a laser interferometer as the primary payload for measuring \(\varDelta d\). More specifically, a Michelson-type interferometer in the heterodyne version has been selected as particularly suitable for measurements over very long distances (Bonino et al. 2017). It will operate with a continuous-wave source at 1064 nm wavelength, stabilized in frequency against the physical length of a high-finesse optical cavity made of ultra-low-expansion material, thermally insulated and controlled. The ultimate limiting factor of the interferometer performance is in fact the stability of the laser source frequency. From the laser frequency noise measured during the development of the high-stability laser for NGGM (Fitzau et al. 2015) and from the metrology performance budget, the measurement error spectral density of \(\varDelta d\) shown in Fig. 4a is estimated.

Fig. 4

a Spectral density of the measurement error of the inter-satellite distance variation, \(\varDelta d\). b Spectral density of the measurement error of the relative non-gravitational acceleration between the satellites, \(\varDelta {\ddot{d}}_{\mathrm {D}}\) (dashed line), and of the second time derivative of the inter-satellite distance variation due to the gravity field only, \(\varDelta {\ddot{d}}_{\mathrm {G}}\) (grey line). For the sake of comparison, the second time derivative of the inter-satellite distance variation, \(\varDelta {\ddot{d}}\) (dotted line), is also shown

In the current baseline configuration of NGGM, each satellite carries two accelerometers for measuring the non-gravitational linear acceleration of its centre of mass (CoM). They are symmetrically arranged around the satellite CoM, where the optical retro-reflector of the laser interferometer is physically located.

The reference accelerometer currently identified for NGGM is the “MicroSTAR” model, recently developed by ONERA (Phuong-Anh et al. 2015). It is based on the same measurement principle as the GOCE accelerometers, but it makes use of a cubic proof mass and a design of the capacitive sensors that provides a measurement of the linear and angular accelerations with nearly equal performance about the three sensitive axes. From the performance of this accelerometer, the measurement error spectral density of relative non-gravitational acceleration between the satellites, \(\varDelta \ddot{d}_{\mathrm {D}}\), shown in Fig. 4b is estimated.

The measurement error of the distance variation between the satellite CoMs produced solely by the gravity field, \(\varDelta d_{\mathrm {G}} = \varDelta d - \varDelta d_{\mathrm {D}}\), can then be obtained by combining the noise spectral density of the distance variation, \(\varDelta d\), with that of the relative acceleration, \(\varDelta \ddot{d}_{\mathrm {D}}\), integrated twice, as shown in Fig. 4b in terms of accelerations. The resulting measurement performance is limited by the laser interferometer noise at frequencies above \(\sim {2}\,\hbox {mHz}\) and by the accelerometer noise at lower frequencies.
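The way the two noise regimes combine can be sketched numerically. The example below assumes purely illustrative flat (white) noise levels for the interferometer and the accelerometers, not the actual NGGM performance budget; it only shows how an acceleration noise, integrated twice, maps into an equivalent displacement noise that dominates at low frequency.

```python
import numpy as np

f = np.logspace(-4, -1, 400)   # frequency axis (Hz)

# Purely illustrative flat noise levels (assumptions, not the NGGM budget):
laser_asd = 1e-8    # interferometer displacement noise, m/sqrt(Hz)
accel_asd = 1e-12   # relative non-gravitational acceleration noise, m/s^2/sqrt(Hz)

# Integrating an acceleration twice in time divides its amplitude spectral
# density by (2*pi*f)**2, turning it into an equivalent displacement noise.
accel_as_disp = accel_asd / (2 * np.pi * f) ** 2

# Total error on the gravitational distance variation (root of quadratic sum)
total_asd = np.sqrt(laser_asd**2 + accel_as_disp**2)

# Crossover frequency where the two contributions are equal
f_cross = np.sqrt(accel_asd / laser_asd) / (2 * np.pi)
```

With these assumed levels the crossover falls in the few-mHz region: the accelerometer term dominates below it and the interferometer term above it, as in Fig. 4b.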

This scenario and the expected performance of the NGGM constituted the basis for analysing the potential of this mission for the new Gravitational Seismology application.

3 NGGM Simulation

The possibility to detect and retrieve earthquake signals from NGGM data is investigated in this study via closed-loop simulations. The simulator generates simplified noisy observables of multiple pairs of GRACE-like satellites and subsequently recovers a gravity field model on a regular basis. Although the measurement principle and the data processing are simplified compared to a full-fledged simulation, such a simulator has demonstrated its capability to provide a realistic estimate of the performance of a gravity mission, given the instrument noises. As illustrated in Fig. 5, the key observable associated with one pair of satellites is the projection along the line of sight of the difference of gravitational acceleration between their respective centres of mass. This observable is then corrupted by an additive noise representing the contribution of both the on-board accelerometers and the laser interferometry system measuring the variation in distance between the satellites. This noise is generated on the basis of the noise PSD (power spectral density) model specified in Fig. 4b.

Fig. 5

Simplified observable, here noted Obs, used in the closed-loop simulator. CoM stands for centre of mass

The orbit parameters used in our NGGM simulation and listed in Table 1 are slightly different from those given in the ADDCON study (Pail et al. 2018). These relatively small corrections account for the fact that only \(J_2\)-perturbed secular orbits are computed in the simulation environment, while the original orbital parameters were derived for a realistic Earth gravity field developed up to degree and order 120. An along-orbit sampling rate of \({0.1}\,\hbox {Hz}\) has been chosen for the observables, corresponding to the sampling rate of GRACE pre-processed data. A higher rate of \({0.2}\,\hbox {Hz}\) has also been tested, but it did not bring any significant improvement in the recovered gravity field. In the simulation, we assumed a mission lifetime of almost 12 years, resulting in a total of 156 recovered gravity fields, each based on 28 days of data (in comparison, 163 monthly solutions were recovered from GRACE data over a period of 15 years). Considering that we aim at recovering the earthquake gravity signature, which is characterized by a spatial scale comparable with the fault dimension (a few tens of kilometres and more), we decided to recover the gravity field up to spherical harmonic (SH) degree and order 140.

The background gravity field observed in the simulation environment consists of the sum of a static gravity field (EIGEN6c4; Förste et al. 2014) and a time-variable gravity field. The latter includes the co- and post-seismic gravity signatures of the earthquakes that we aim to detect (discussed in Sect. 4) and the contribution of five different compartments of the Earth system: the atmosphere (A), the oceans (O), the hydrosphere (H), the ice sheets and caps (I) and the Solid Earth (S). The A, O, H, I and S (hereafter abbreviated AOHIS) contributions are computed from the updated ESA Earth System Model (ESA-ESM; Dobslaw et al. 2015), which comes together with a de-aliasing product, hereafter noted DEAL, and a realistic error product (noted AOerr) degrading DEAL. The de-aliasing product is typically used in real GRACE data processing to remove directly from the observations the rapidly varying gravity signal, mostly from the ocean and the atmosphere, that may otherwise lead to aliasing. Since such a product is based essentially on models, it may contain non-negligible errors that must be taken into account. In our simulations, though, we divided the amplitude of this error by 2. This somewhat naïve reduction is motivated by the fact that, for NGGM, a promising post-processing method, namely the Wiese approach (Wiese et al. 2011), allows most of the AOHIS signal to be retrieved, making the use of a de-aliasing product (DEAL+AOerr) irrelevant (Daras and Pail 2017). Essentially, the method consists in estimating, along with the monthly model, a series of low-degree gravity models over shorter periods, typically 1–2 days, leading to a reduction in the temporal aliasing of the otherwise undersampled AO signal. Such a “self-dealiasing” method is not implemented in our simulation environment, but rather than simply cancelling the AOerr, we decided to keep a conservative halved value.

The general approach to estimate the 28-day gravity field solutions relies on iterative weighted least squares. The iteration takes the coloured noise into account in the weighting by successively approximating its PSD from the residuals. More specifically, the estimation begins with an unweighted least-squares adjustment, from which the residuals are computed. Based on an estimate of the residuals’ PSD, a whitening filter is designed and applied to the observations and to the design matrix. A second least-squares adjustment is then carried out, and so on. Our computations show, however, that one iteration is actually enough, as further iterations improve the results of the unweighted adjustment only marginally (Fig. 6).
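A minimal numerical sketch of this whitening iteration, on synthetic data with coloured noise, is given below. All sizes and noise shapes are assumptions chosen for illustration, not the NGGM processing chain, and the whitening is done circularly in the frequency domain, which suffices for a sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2048, 5
A = rng.standard_normal((n, p))            # design matrix
x_true = np.arange(1.0, p + 1)

# Coloured noise: white noise shaped by a 1/f-like amplitude filter
freqs = np.fft.rfftfreq(n, d=1.0)
shape = 1.0 / np.maximum(freqs, freqs[1])  # boosts low frequencies
noise = np.fft.irfft(np.fft.rfft(rng.standard_normal(n)) * shape, n)
y = A @ x_true + 0.1 * noise

def whiten(v, psd):
    """Divide the spectrum of v by sqrt(psd) (circular whitening)."""
    return np.fft.irfft(np.fft.rfft(v) / np.sqrt(psd), len(v))

# 1) unweighted least-squares adjustment
x0, *_ = np.linalg.lstsq(A, y, rcond=None)

# 2) estimate the residual PSD with a raw periodogram
resid = y - A @ x0
psd = np.maximum(np.abs(np.fft.rfft(resid)) ** 2 / n, 1e-12)

# 3) whiten observations and design matrix, then refit
yw = whiten(y, psd)
Aw = np.column_stack([whiten(A[:, j], psd) for j in range(p)])
x1, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
```

Further iterations would re-estimate the PSD from the new residuals; as noted in the text, they change little once the first whitening has been applied.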

Fig. 6

Square root of the PSD of the residuals after the unweighted least squares (LS) and after the first iteration for both a the polar and b the inclined orbits

Figure 6 shows the square root of the PSD of the residuals after the unweighted least squares and after the first iteration, for both the polar and the inclined orbits. Clearly, the instrumental noise is not the limiting source of error, except at the very end of the spectrum. Furthermore, the improvement brought by the first iteration at the level of the residuals is marginal. The residuals are actually dominated by the rapidly varying (period shorter than 28 days) time-variable gravity field, which is not captured by the 28-day recovered static gravity field. Indeed, by definition, the geodetic problem assumes the gravity field to be constant over each recovered period, while it is not. Two extra simulations confirm this interpretation. First, when only a static gravity field is considered as background, the residuals computed after a least-squares adjustment have the same PSD as the prescribed instrumental noise (not shown here); this proves that the simulation workflow and the adjustment are correct. Second, when only \((\mathrm {AOHIS} - \mathrm {DEAL} - \mathrm {AOerr}/2)\) is used as the background gravity field, without the earthquake signatures, the residuals show exactly the same PSD as in Fig. 6. Therefore, one can conclude that better noise modelling will not reduce the effect of what can be regarded as the aliasing of un-modelled signals. Nevertheless, this suggests that reducing the duration of one solution from 28 days to 7 or 14 days would reduce this effect, with the drawback that a less dense coverage of the Earth’s surface would further limit the spatial resolution of the NGGM. The trade-off between the duration of the recovered period and the spatial resolution, and the optimal choice for the detection of the earthquake signatures, will be investigated in future studies.

In order to quantify the quality of the gravity fields recovered by the NGGM and the amplitude of the observational errors, we have also considered the true error, defined here as the difference between the recovered gravity field and the background gravity field used in the NGGM simulation, averaged over each 28-day period. This is shown in Fig. 7a in terms of the root mean square per SH degree (degree-RMS) of the gravity disturbances of the true error, both before and after the first iteration of the weighted least squares. The first iteration improves the degree-RMS only marginally, and only for degrees larger than 100. Finally, the influence of the Atmosphere-Ocean de-aliasing error on the true error has also been investigated, to get a clearer view of the room for improvement. As shown in Fig. 7b, halving the nominal, realistic de-aliasing product error noticeably reduces the true error, in particular for degrees between 20 and 100.
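For concreteness, one common way to compute such a degree-RMS of gravity disturbances from SH coefficient errors is sketched below. The \((l+1)\) factor and the evaluation at a mean Earth radius are conventions that vary between authors, so this is an assumed, illustrative definition rather than the exact formula used in the study.

```python
import numpy as np

GM = 3.986004418e14   # Earth's gravitational parameter (m^3/s^2)
R = 6.3781363e6       # reference radius (m)

def degree_rms(dC, dS, l):
    """Degree-RMS of the gravity disturbance implied by fully normalized
    SH coefficient errors dC[l, m], dS[l, m], evaluated on the sphere r = R."""
    power = np.sum(dC[l, : l + 1] ** 2 + dS[l, : l + 1] ** 2)
    return GM / R**2 * (l + 1) * np.sqrt(power)

# toy example: white coefficient errors of 1e-12 up to degree and order 140
lmax = 140
rng = np.random.default_rng(1)
dC = rng.standard_normal((lmax + 1, lmax + 1)) * 1e-12
dS = rng.standard_normal((lmax + 1, lmax + 1)) * 1e-12
rms = np.array([degree_rms(dC, dS, l) for l in range(2, lmax + 1)])
```

For flat coefficient errors, the degree-RMS grows with degree because both the \((l+1)\) factor and the number of orders per degree increase, which is why curves such as those in Fig. 7 rise towards high degrees.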

Fig. 7

a Degree-RMS of the true error of 16 recovered solutions before (green) and after (red) the first iteration of the weighted least squares. b Degree-RMS of the true error for the first 4 recovered solutions when no AOerr is included (blue), half of the nominal AOerr is included like in our main simulation (red) and the full nominal AOerr product is included (green). For comparison, the degree-RMS of the corresponding AOHIS signal is also plotted (grey)

4 Earthquake Gravity Signatures

In order to investigate a wide set of earthquake scenarios to be included in the background gravity model feeding the NGGM simulation, we have considered different combinations of the following six features:

  1. focal mechanism: normal, inverse or strike-slip (with dip and rake angles of \({60}^{\circ }\) and \({270}^{\circ }\), \({30}^{\circ }\) and \({90}^{\circ }\), and \({90}^{\circ }\) and \({0}^{\circ }\), respectively);

  2. line of strike: parallel, perpendicular or oblique (\({45}^{\circ }\)) to the polar orbit;

  3. depth of the top edge of the fault: shallow or deep (0 or \({20}\,\hbox {km}\), respectively);

  4. occurrence time: at the beginning (2–4 years), in the middle (5–7 years) or at the end (8–10 years) of the NGGM operational period;

  5. location: inland, offshore or close to the coastline (within 300 km but always offshore);

  6. rheological stratification: asthenospheric viscosities of \(10^{18}\) or \(10^{19}\,\hbox {Pas}\).

For strike-slip earthquakes, in light of the peculiar (quadrupolar) symmetry of their gravity disturbances, we only consider lines of strike parallel or oblique to the polar orbit. The perpendicular case would be similar to the parallel one.

As shown in Fig. 8, considering all the combinations of the six selected features, we obtain 288 earthquakes, to which we have added three additional earthquakes in the Mediterranean area for a later investigation of this specific seismogenic zone. Rather than performing one NGGM simulation for each earthquake, we decided to run a single simulation in which all the earthquakes are present. To preserve the possibility of studying the detectability of each earthquake separately from the others, we distributed the earthquakes all around the Earth (between \({70}^{\circ }\hbox {S}\) and \({70}^{\circ }\hbox {N}\)) in such a way that the distance between the epicentres is never smaller than \({8}^{\circ }\) (about \({880}\,\hbox {km}\)). This minimum distance ensures that the gravity signatures of the different earthquakes do not overlap, so that they can be analysed separately from each other.
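Such a distribution can be generated, for instance, by simple rejection sampling with a great-circle distance test. The sketch below is an assumed reconstruction of the procedure, not the authors' actual algorithm; the sample size of 50 is chosen only to keep the example fast.

```python
import numpy as np

rng = np.random.default_rng(2)

def great_circle_deg(p, q):
    """Angular distance (degrees) between p and q, each given as (lat, lon) in degrees."""
    (lat1, lon1), (lat2, lon2) = np.radians(p), np.radians(q)
    c = (np.sin(lat1) * np.sin(lat2)
         + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def place_epicentres(n, min_sep=8.0, lat_max=70.0):
    """Draw n points uniformly on the sphere within |lat| <= lat_max,
    rejecting candidates closer than min_sep degrees to any accepted point."""
    s = np.sin(np.radians(lat_max))
    pts = []
    while len(pts) < n:
        lat = np.degrees(np.arcsin(rng.uniform(-s, s)))  # area-uniform latitude
        lon = rng.uniform(-180.0, 180.0)
        if all(great_circle_deg((lat, lon), q) >= min_sep for q in pts):
            pts.append((lat, lon))
    return pts

epicentres = place_epicentres(50)
```

Note the area-uniform draw of latitude through the arcsine, without which points would cluster towards the poles.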

Fig. 8

Spatial distribution of the 291 earthquake scenarios. The three additional earthquakes in the Mediterranean area are circled in green

We have chosen the earthquakes to occur at multiples of 28 days, so that they always fall exactly at the beginning (or at the end) of one of the 28-day periods over which we recover the gravity field. In this way, and differently from the already available GRACE products, we avoid having to reject those NGGM solutions referring to 28-day periods during which one or more earthquakes occur. In light of this, the occurrence times were randomly chosen among the multiples of 28 days ranging from the 13th to the 130th 28-day period, guaranteeing at least 1 year of data before and 2 years after each earthquake, in order to detect both the co- and post-seismic phases during a mission lifetime of almost 12 years.
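The timing arithmetic can be checked directly: 13 periods of 28 days give 364 days (about 1 year) before the earliest event, and 156 − 130 = 26 periods give 728 days (2 years) after the latest one. The random draw below is only illustrative; the bounds are what matters.

```python
import numpy as np

period_days = 28
n_periods = 156                                  # 156 * 28 = 4368 days, ~12 years
rng = np.random.default_rng(4)

# one occurrence period index per earthquake, from the 13th to the 130th period
occurrence_period = rng.integers(13, 131, size=291)
t_occ = occurrence_period * period_days          # occurrence time in days

days_before = t_occ                              # data available before each event
days_after = n_periods * period_days - t_occ     # data available after each event
```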

Finally, in order to fully characterize the earthquake forcing, we also assigned to each earthquake a co-seismic slip distribution over the fault. This has been done on the basis of the circular rupture model proposed by Eshelby (1957), who derived the slip in an infinite homogeneous elastic space under the assumption of uniform strain along the rupture (see also Kanamori and Anderson 1975). The co-seismic slip distribution, \(\delta u\), thus takes the following form

$$\begin{aligned} \delta u(r) = \frac{3}{2}\,\delta {\bar{u}} \sqrt{1-\left( \frac{r}{R}\right) ^2} \end{aligned}$$
(1)

where r is the distance from the fault centre, and R and \(\delta {\bar{u}}\) are the fault radius and the average slip, which we obtain through the following relations

$$\begin{aligned} \delta {\bar{u}}&= \frac{16}{3\,\pi }\,\frac{R\,\delta \sigma }{\mu }\,\frac{1-\nu }{2-\nu } \end{aligned}$$
(2)
$$\begin{aligned} M_\mathrm{S}&= \mu \,\big (\pi \,R^2\big )\,\delta {\bar{u}} \end{aligned}$$
(3)

where \(\nu\) and \(\mu\) are the Poisson ratio and the shear modulus, \(M_{\mathrm {S}}\) is the seismic moment and \(\delta \sigma\) is the stress drop. Table 2 reports the values of R and \(\delta {\bar{u}}\) for the earthquake magnitudes considered in this study.
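Equations (2) and (3) can be combined to eliminate \(\delta {\bar{u}}\), giving \(M_{\mathrm {S}} = (16/3)\,R^3\,\delta \sigma \,(1-\nu )/(2-\nu )\), so that R and \(\delta {\bar{u}}\) follow from the seismic moment once the stress drop is fixed. The sketch below assumes illustrative values \(\mu = 30\) GPa, \(\nu = 0.25\) and \(\delta \sigma = 3\) MPa, together with the standard moment-magnitude relation; the constants actually used for Table 2 may differ.

```python
import numpy as np

mu = 30e9        # shear modulus (Pa), assumed
nu = 0.25        # Poisson ratio, assumed
dsigma = 3e6     # stress drop (Pa), a typical assumed value

def fault_from_magnitude(Mw):
    """Fault radius R (m) and average slip (m) from moment magnitude Mw,
    inverting Eqs. (2)-(3): Ms = (16/3) * R**3 * dsigma * (1-nu)/(2-nu)."""
    Ms = 10.0 ** (1.5 * Mw + 9.1)                       # seismic moment (N m)
    R = (3.0 * Ms * (2 - nu) / (16.0 * dsigma * (1 - nu))) ** (1.0 / 3.0)
    du = Ms / (mu * np.pi * R**2)                       # average slip, Eq. (3)
    return R, du

R7, du7 = fault_from_magnitude(7.0)
```

With these assumed constants, a \(M_{\mathrm {W}}=7\) event yields a fault radius of roughly 18 km and an average slip of roughly 1.3 m, of the same order as the fault dimensions discussed in the text.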

Table 2 Fault radius and mean slip for earthquake magnitudes ranging from \(M_{\mathrm {W}}=6.8\) to 8 according to Eqs. (2) and (3)

The modelling of the earthquake gravity signatures has been performed within the framework of spherically symmetric, self-gravitating, compressible viscoelastic (Maxwell) Earth models (Tanaka et al. 2006; Cambiotti and Sabadini 2015; Sabadini et al. 2016) based on the Preliminary Reference Earth Model (PREM; Dziewonski and Anderson 1981). The lithospheric viscosity is set to \(10^{23}\,\hbox {Pas}\) in the shallowest \({50}\,\hbox {km}\); below, it decreases exponentially to the asthenospheric value (\(10^{18}\) or \(10^{19}\,\hbox {Pas}\)) at \({100}\,\hbox {km}\) depth. The upper mantle viscosity (down to \({220}\,\hbox {km}\) depth) is set to \(10^{21}\,\hbox {Pas}\), while the lower mantle viscosity (below the \({670}\,\hbox {km}\) discontinuity, not shown here) is set to \({3\times 10^{22}}\,\hbox {Pas}\). The lower asthenospheric viscosity (\(10^{18}\,\hbox {Pas}\)) aims at emphasizing the post-seismic gravity disturbance within the mission lifetime, while the higher one (\(10^{19}\,\hbox {Pas}\)) represents the case of earthquake signatures mainly characterized by the co-seismic (rather than post-seismic) phase. Furthermore, in order to account for the ocean water redistribution induced by seafloor displacements, we have also added a global ocean layer (Cambiotti et al. 2011) to the Solid Earth modelling for offshore earthquakes.

Such a physico-mathematical model has already been applied successfully to investigate the long-wavelength earthquake signatures observed by the GRACE and GOCE missions (Cambiotti and Sabadini 2013; Fuchs et al. 2016; Cambiotti 2020) and, therefore, it has been judged adequate for the modelling of the earthquake signatures. As for the updated ESA-ESM, the calculations have been performed for the years 1995 to 2006, with a temporal resolution of 6 hours and up to SH degree and order 180. The files listing the hypocentral location, the origin time, the focal mechanism and the rheological stratification of the earthquake scenarios, together with the modelled gravity signatures (only for magnitudes \(M_{\mathrm {W}}=7\) and 8), are available to the scientific community for future gravity mission simulation studies at https://sites.unimi.it/grav_seismology.

Fig. 9

a Gravity disturbances due to the \(M_{\mathrm {W}}=7\) earthquake scenarios, b AOHIS contributions, c half of the AOerr and d the true error, for the 120th recovered period of 28 days and up to SH degree and order 140

In order to give a first idea of the earthquake gravity signatures and of the other contributions used in the NGGM simulation, Fig. 9 shows the gravity disturbances associated with the \(M_{\mathrm {W}}=7\) earthquakes, the AOHIS contributions and half of the Atmosphere-Ocean de-aliasing error (AOerr/2), averaged over the 120th 28-day period. It also shows the true error, as defined at the end of Sect. 3, for the same 28-day period. The earthquake signatures have amplitudes of about 1\(\,\upmu \hbox {Gal}\), larger than those of the de-aliasing error by about a factor of two. Nevertheless, they are smaller than the AOHIS contributions and the true error by more than one order of magnitude. In particular, we note that the true error is characterized by north–south stripes (similar to the typical errors of GRACE data) and can be as high as 40\(\,\upmu \hbox {Gal}\), especially over the continents and at equatorial latitudes. The large amplitude of the true error, however, does not imply that the earthquake gravity signature is undetectable: its detection cannot be simply excluded on the basis of a signal-to-noise ratio analysis. Rather, as we discuss in the following sections and as already shown by Cambiotti (2020) for the 2011 Tohoku earthquake using GRACE data, the earthquake signature can become detectable by looking for its specific spatio-temporal pattern in the whole data time series. Thus, the more data we have, the smaller the signals that can be recognized, even when they are smaller than the observational errors.

Figures 10 and 11 show the same gravity disturbances as Fig. 9, but after Gaussian filtering with filter radii of \({1}^{\circ }\) and \({2}^{\circ }\), respectively (Jekeli 1981). It is noteworthy that both spatial filterings damp the true error more than the earthquake signatures and the de-aliasing error. On the other hand, they have little impact on the AOHIS contributions, which are mainly characterized by long wavelengths. With a view to discriminating the earthquake signature also from other geophysical processes, and because we will take into account the error structure of the NGGM synthetic data, we decide not to apply any spatial filtering. Indeed, as long as the data covariance matrix resulting from the solution of the normal equations of the geodetic inversion is reliable, we will consistently give more weight to the long (rather than short) wavelengths of the gravity field, the long wavelengths being the more accurate. In particular, as already shown by Cambiotti (2020), even the less accurate GRACE data, when supplemented by their covariance matrix, can provide physically sound information on the co-seismic slip of the 2011 Tohoku earthquake without any spatial filtering, the latter being necessary only to limit the trade-off between the shallow rheological stratification and the trends due to geophysical processes other than the earthquake.
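For reference, the per-degree weights of the Gaussian filter can be computed with the standard recursion attributed to Jekeli (1981). The implementation below is a common textbook form, with a guard against the recursion's known numerical instability at high degree, and is offered as a sketch rather than the exact code used in the study.

```python
import numpy as np

def gauss_weights(radius_deg, lmax):
    """Per-degree weights of a Gaussian averaging filter of the given
    half-weight radius (in degrees of arc), via the Jekeli-type recursion."""
    b = np.log(2.0) / (1.0 - np.cos(np.radians(radius_deg)))
    w = np.zeros(lmax + 1)
    w[0] = 1.0
    w[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for l in range(1, lmax):
        w[l + 1] = -(2 * l + 1) / b * w[l] + w[l - 1]
        if w[l + 1] <= 0.0:        # recursion turns unstable near zero
            w[l + 1:] = 0.0
            break
    return w

w1 = gauss_weights(1.0, 140)   # 1-degree filter, as in Fig. 10
w2 = gauss_weights(2.0, 140)   # 2-degree filter, as in Fig. 11
```

A filtered field is obtained by multiplying each degree-l coefficient by w[l]; the larger radius damps the high degrees more strongly, consistent with the stronger smoothing of Fig. 11 relative to Fig. 10.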

Fig. 10

As Fig. 9, but after Gaussian filtering with filter radius \({1}^{\circ }\)

Fig. 11

As Fig. 9, but after Gaussian filtering with filter radius \({2}^{\circ }\)

5 The Earthquake Detectability

In order to discuss the detectability of the earthquake signature, we will estimate the amplitude of each earthquake signature by NGGM data inversion and judge whether the NGGM can detect it from the agreement between the estimated and the original amplitudes. In this respect, the inverse problems that we are going to solve are similar to that introduced by Cambiotti (2020) for the 2011 Tohoku earthquake, except that we do not attempt here to estimate the co-seismic slip distribution and the rheological stratification. Rather, the latter are assumed to be known in advance, up to a scaling factor of the co-seismic slip distribution (or, equivalently, of the seismic moment). In other words, we assume that the specific spatio-temporal pattern of the earthquake gravity signature that we are looking for is known, and we fit it to the NGGM data time series in order to estimate its amplitude, as well as the uncertainty of this estimate. This, indeed, is the first issue that we have to address before considering what additional information we can actually extract from these new gravity data.

Starting from this general statement of the problem, we will first assume that the AOHIS contributions are also known in advance and, therefore, remove them from the NGGM synthetic data. In this case, the earthquake signature must be discriminated only from the static gravity field and the observational errors. We will refer to this approach as the first inverse problem. Afterwards, we will also evaluate the possibility of discriminating the earthquake signature from other time-variable gravity signals, such as those of the AOHIS contributions, which is the real problem that we would face with real satellite gravity data. For this second inverse problem, the AOHIS contributions are unknown and, similar to Cambiotti (2020), we will include them in the modelling by means of linear trends and annual and semi-annual periodic functions.

Considering that the earthquake signature is mainly localized around the epicentral area, we localize in space the NGGM data time series using the Slepian functions optimally concentrated in a spherical cap of \(4^\circ\) centred at the epicentre of each earthquake (Simons et al. 2006; Cambiotti and Sabadini 2013; Cambiotti 2020). At the NGGM spatial resolution (SH degree \(L=140\)), rather than all the \((L+1)^2=19{,}881\) spherical harmonics required to describe the gravity field everywhere outside the Earth, we only need \(S=23\) Slepian functions (shown in Fig. 12) and we thus obtain 23 time series of Slepian coefficients which fully describe the gravity signal in the selected area.
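The number of Slepian functions to retain can be anticipated from the Shannon number \(N=(L+1)^2 A/(4\pi )\), with \(A\) the area of the concentration region (Simons et al. 2006). A quick back-of-the-envelope check for the cap used here (our own sketch):

```python
import numpy as np

L = 140                                        # NGGM band limit (SH degree)
theta = np.radians(4.0)                        # spherical-cap radius
area_fraction = (1.0 - np.cos(theta)) / 2.0    # A / (4*pi) for a spherical cap
N = (L + 1) ** 2 * area_fraction               # Shannon number
print(round(N))   # -> 24, close to the S = 23 functions retained
```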

Fig. 12

23 Slepian functions bandlimited to SH degree 140 and optimally concentrated in a spherical cap of \(4^\circ\) (about 440 km; dashed circle). Here, the centre of the cap is arbitrary

In addition to the 156 solutions of the recovered gravity field, the NGGM simulation also provides the error structure, that is, the covariance matrix of the Stokes coefficients, which results from the inversion of the normal equations. In this respect, we assume that there is only spatial correlation, but no temporal correlation between subsequent solutions. The covariance matrix has been calculated for one representative solution only and used for all the others, because it is expected to be stationary in time due to the relative orbit design of the NGGM.

From the covariance matrix of the Stokes coefficients, we also obtain the corresponding \(23\times 23\) covariance matrix of the Slepian coefficients by error propagation. Another simplification adopted is that the NGGM simulation has been carried out only for the \(M_{\mathrm {W}}=7\) earthquakes. Synthetic gravity data for earthquakes of other magnitudes, instead, have been obtained by simply removing the gravity signatures of the \(M_{\mathrm {W}}=7\) earthquakes and adding those of earthquakes of the desired magnitude. We do not expect this simplification to alter our conclusions, because the amplitude of the time-dependent gravity signatures caused by earthquakes is comparable to or smaller than the other time-variable contributions of the updated ESA-ESM. Furthermore, most of the noise has been shown to come from the temporal aliasing of AOHIS. Compared to the latter, the post-seismic signal is very slow and will not contribute significantly to the error, which remains dominated by AOHIS temporal aliasing.
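The error propagation from Stokes to Slepian coefficients is the usual linear-transformation rule for covariances: if \({\mathbf {G}}\) maps Stokes to Slepian coefficients, then \({\mathbf {Y}}={\mathbf {G}}\,{\mathbf {C}}\,{\mathbf {G}}^{T}\). A toy-sized sketch (the matrices are random stand-ins; the actual NGGM case has \((L+1)^2=19{,}881\) Stokes coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n_stokes, n_slepian = 200, 23                  # toy sizes (real case: 19881 and 23)

G = rng.standard_normal((n_slepian, n_stokes))        # stand-in projection matrix
A = rng.standard_normal((n_stokes, n_stokes))
C_stokes = A @ A.T + n_stokes * np.eye(n_stokes)      # symmetric positive definite

C_slepian = G @ C_stokes @ G.T                        # propagated 23 x 23 covariance

# the propagated covariance must remain symmetric and positive definite
assert np.allclose(C_slepian, C_slepian.T)
assert np.all(np.linalg.eigvalsh(C_slepian) > 0)
```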

According to this framework, for each earthquake and 28-day period, we obtain the Slepian coefficients of the NGGM synthetic data, \({\mathbf {y}}_i\), of the AOHIS contributions, \({\mathbf {b}}_i\) and of the earthquake signature, \({\mathbf {q}}_i\), with the subscript i indicating the ith period. Then, we formally define the first inverse problem by the following relation between data and model

$$\begin{aligned} {\mathbf {y}}_{i} -{\mathbf {b}}_i = {\mathbf {s}} + \gamma \,{\mathbf {q}}_i + \varvec{\epsilon }_i \end{aligned}$$
(4)

where \({\mathbf {s}}\) are the Slepian coefficients describing the static gravity field and \(\gamma\) is the scaling factor of the earthquake signature that we aim to estimate and that, by definition, we expect to be one, \(\gamma =1\). Furthermore, \(\varvec{\epsilon }\) are the Slepian coefficients of the observational errors, which we assume to obey a Gaussian distribution with zero mean, \({\mathbf {0}}\), and covariance matrix, \(\alpha ^2\,{\mathbf {Y}}\), that is, \(\varvec{\epsilon }\sim N({\mathbf {0}},\alpha ^2\,{\mathbf {Y}})\). In particular, \({\mathbf {Y}}\) is the data covariance matrix provided by the NGGM simulation after spatial localization, and \(\alpha\) is a (positive) hyper-parameter which refines a posteriori the observational errors (Yabuki and Matsu’ura 1992; Cambiotti et al. 2017). This hyper-parameter must be estimated as part of the inversion, jointly with the amplitude of the earthquake signature, \(\gamma\), and the Slepian coefficients describing the static gravity field, \({\mathbf {s}}\).
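Since Eq. (4) is linear in \({\mathbf {s}}\) and \(\gamma\), they can be estimated by generalized least squares, accumulating normal equations over the 156 periods with \({\mathbf {Y}}^{-1}\) as weight (the scalar \(\alpha ^2\) cancels in the estimates and only rescales the formal errors). A self-contained sketch on synthetic numbers of our own choosing, not NGGM values:

```python
import numpy as np

rng = np.random.default_rng(1)
S, T = 23, 156                             # Slepian functions, 28-day periods

# --- synthetic truth (illustrative magnitudes only) ---
s_true = rng.standard_normal(S)            # static-field Slepian coefficients
q = rng.standard_normal((T, S)) * 0.1      # earthquake signature per period
Y = np.diag(rng.uniform(0.5, 2.0, S))      # data covariance (kept diagonal here)
eps = rng.multivariate_normal(np.zeros(S), Y, size=T)
d = s_true + 1.0 * q + eps                 # y_i - b_i with true gamma = 1

# --- generalized least squares for x = (s, gamma) ---
# per-period design matrix A_i = [I_S | q_i]; the normal equations are
# accumulated with the inverse data covariance as weight
W = np.linalg.inv(Y)
N = np.zeros((S + 1, S + 1))
r = np.zeros(S + 1)
for i in range(T):
    A_i = np.hstack([np.eye(S), q[i][:, None]])
    N += A_i.T @ W @ A_i
    r += A_i.T @ W @ d[i]
x = np.linalg.solve(N, r)
gamma_hat = x[-1]
sigma_gamma = np.sqrt(np.linalg.inv(N)[-1, -1])   # formal one-sigma error

print(f"gamma = {gamma_hat:.2f} +/- {sigma_gamma:.2f}")
```

The a posteriori refinement via \(\alpha\) would then multiply `sigma_gamma` by the square root of the reduced chi-square of the residuals.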

For the second inverse problem, instead, the relation between data and model reads

$$\begin{aligned} {\mathbf {y}}_{i} = {\mathbf {s}} + {\mathbf {K}}_i\,{\mathbf {m}} + \gamma \,{\mathbf {q}}_i + \varvec{\epsilon }_i \end{aligned}$$
(5)

where \({\mathbf {m}}\) collects the Slepian coefficients describing the linear trends and the annual and semi-annual periodic signals, and \({\mathbf {K}}_i\) is the matrix describing the linear relation between them and the Slepian coefficients of the AOHIS contributions, as in Cambiotti (2020). In this second inverse problem, due to the simplistic modelling of the AOHIS contributions, we must expect \(\varvec{\epsilon }_i\) to include both observational and modelling errors. Nevertheless, we will continue to assume that the errors obey a Gaussian distribution with zero mean, \({\mathbf {0}}\), and covariance matrix, \(\alpha ^2\,{\mathbf {Y}}\). In light of this, the hyper-parameter \(\alpha\), in addition to refining a posteriori the observational errors, will also account for modelling errors, although in a simplistic way (Yabuki and Matsu’ura 1992; Cambiotti et al. 2017).
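As an illustration of how the temporal part of \({\mathbf {K}}_i\) can be organized, each Slepian coefficient gets five regressors: a linear trend plus annual and semi-annual sine/cosine pairs. This is our own minimal construction; Cambiotti (2020) may organize the matrices differently:

```python
import numpy as np

def temporal_basis(t_years):
    """Regressors for one Slepian coefficient at epoch t (years):
    linear trend + annual + semi-annual periodic terms."""
    w1 = 2.0 * np.pi                  # annual angular frequency (rad/yr)
    w2 = 4.0 * np.pi                  # semi-annual angular frequency
    return np.array([t_years,
                     np.cos(w1 * t_years), np.sin(w1 * t_years),
                     np.cos(w2 * t_years), np.sin(w2 * t_years)])

# mid-epochs of the 156 28-day periods of the 12-year simulation
t = (np.arange(156) + 0.5) * 28.0 / 365.25
B = np.vstack([temporal_basis(ti) for ti in t])   # 156 x 5 temporal design matrix
```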

5.1 First Inverse Problem

Before presenting the results for all the earthquakes, and in order to better explain the logic behind the first inversion scheme, Eq. (4), we first consider, as an example, the 62nd (normal) and 64th (inverse) \(M_{\mathrm {W}}=7\) earthquakes occurring in the Mediterranean area (see Table 3 for the seismic source parameters; the post-seismic gravity signatures have been calculated using an asthenospheric viscosity of \(10^{18}\) Pa s). The estimates of the scaling factor are \(\gamma =0.96 \pm 0.08\) and \(1.01\pm 0.04\), respectively, both consistent with the expected value of 1 within the one-sigma error.

Table 3 Parameters of the 62nd and 64th earthquakes

Looking at the fits to the time series of Slepian coefficients (Figs. 13, 14), we note that there are a few Slepian coefficients for which the earthquake signature is (almost) comparable with the observational errors. In particular, for the 62nd earthquake (Fig. 13), the gravity signature is comparable to (although smaller than) the observational errors only in the 1st and 2nd time series of Slepian coefficients, which correspond to Slepian functions describing polar patterns (Fig. 12). For the 64th earthquake (Fig. 14), instead, this occurs also in the 7th and 8th time series, which correspond to Slepian functions describing north–south dipolar patterns (Fig. 12). Each time series where the earthquake signal is nearly visible, in addition to constraining the respective Slepian coefficient of the static gravity field, contributes to the estimate of the scaling factor. In this respect, leaving aside the rather simple issue of estimating the static gravity field, we can say that, for each earthquake, we have a few time series constraining just one model parameter, which explains why, at least for these two representative cases, the amplitudes of the earthquake signature have been recovered with relative accuracies of 4% and 1% (i.e., the differences between the estimated scaling factors and the expected value of 1).

Fig. 13

Time series of modelled and observed Slepian coefficients (black and grey lines, respectively) for the 62nd \(M_{\mathrm {W}}=7\) earthquake according to the first inverse problem, Eq. (4). The modelled time series thus represent the earthquake gravity signature scaled by the estimated scaling factor \(\gamma\), and the exact AOHIS contributions have been removed from the observed time series. For the sake of visualization, the modelled static gravity field has also been removed and the vertical scale changes among the panels. The number in the lower left corner of each panel indicates the Slepian function to which the time series refers (as ordered in Fig. 12)

Fig. 14

As Fig. 13 but for the 64th \(M_{\mathrm {W}}=7\) earthquake

In order to get an overall idea of the earthquake detectability, we show in Fig. 15 the estimated scaling factors, \(\gamma\), and their uncertainties, \(\sigma\), in the form of a histogram for the earthquakes of magnitudes 6.8, 7 and 7.2. Furthermore, in order to verify the consistency of these estimates with the expected value of 1 and, at the same time, to verify the quality of the methodology adopted here, we also plot the normalized scaling factors defined as follows

$$\begin{aligned} {\widehat{\gamma }}=(\gamma -1)/\sigma \end{aligned}$$
(6)

We first note that, as it should be for linear inverse problems with Gaussian observational errors, the normalized scaling factors closely obey a normal distribution for all three earthquake magnitudes (panels a, d and g). In particular, they pass the Shapiro–Wilk test (Shapiro and Wilk 1965) with significance levels of 93, 94 and 68%, respectively. In contrast, the scaling factors do not follow a Gaussian distribution, although their distribution is quite symmetric around the expected value and bell shaped (panels b, e and h). This indicates that the estimated scaling factors cannot be directly compared with one another: indeed, they refer to different earthquake types and geographic contexts, each of which is characterized by specific observational errors. This consideration is also confirmed by the wide range spanned by the estimated uncertainties (panels c, f and i).
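The normality check of Eq. (6) can be reproduced with a standard Shapiro–Wilk test; the sketch below uses synthetic stand-ins for the 291 estimates (the actual test is, of course, run on the estimated \(\gamma\) and \(\sigma\)):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# If the estimates are unbiased and the formal errors realistic, the
# normalized scaling factors (gamma - 1) / sigma should be standard normal.
sigma = rng.uniform(0.02, 0.30, size=291)         # per-earthquake uncertainties
gamma = 1.0 + sigma * rng.standard_normal(291)    # estimates scattered by sigma

gamma_norm = (gamma - 1.0) / sigma
stat, p_value = stats.shapiro(gamma_norm)         # Shapiro-Wilk normality test
print(f"W = {stat:.3f}, p = {p_value:.2f}")
```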

It is noteworthy that, for almost two thirds of the \(M_{\mathrm {W}}=7\) earthquakes (181 out of 291), the scaling factor has been estimated with a relative accuracy better than 10% (panel b). Furthermore, for almost all the earthquakes (with 8 exceptions), the relative accuracy is better than 50%. For the \(M_{\mathrm {W}}=6.8\) earthquakes, instead, a relative accuracy better than 10% is obtained only in about one-third of the cases (108 out of 291) and there are 37 cases in which the estimated scaling factor differs by more than 50% from the expectation. In contrast, an accuracy better than 10% is obtained for more than 80% of the \(M_{\mathrm {W}}=7.2\) earthquakes (254 out of 291). In light of these results, we consider the magnitude \(M_{\mathrm {W}}=7\) as the minimum threshold for which earthquake signatures can be detected by the NGGM with good accuracy in most cases, although under the simplistic assumptions of the first inverse problem. This holds regardless of the earthquake type and the geographic context.
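The bookkeeping behind these percentages is a simple tally of how many estimates fall within each relative-accuracy threshold; a sketch with a synthetic stand-in for the 291 scaling factors:

```python
import numpy as np

rng = np.random.default_rng(3)
gamma = 1.0 + 0.15 * rng.standard_normal(291)   # stand-in estimated scaling factors

# count estimates within 10%, 50% and 100% of the expected value of 1,
# as reported in panels b, e and h of Fig. 15
counts = []
for threshold in (0.10, 0.50, 1.00):
    n = int(np.sum(np.abs(gamma - 1.0) < threshold))
    counts.append(n)
    print(f"within {threshold:4.0%}: {n} of 291")
```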

Similar analyses conducted on subsets of the earthquake scenarios have not led to a clear identification of which of the six earthquake features (Sect. 4) is the determining factor for earthquake detectability. Moreover, because we located each earthquake type at a single geographic location, it is difficult to assess the impact on detectability of the background gravity field, which differs from one place to another. Additional NGGM simulations and earthquake scenarios will need to be considered in the future to answer more detailed questions.

Fig. 15

Distributions of the normalized scaling factors, \({\widehat{\gamma }}\) (left column), of the scaling factors, \(\gamma\) (middle column), and of their uncertainties, \(\sigma\) (right column), for the earthquakes of magnitudes 6.8, 7 and 7.2 (top, middle and bottom rows, respectively), within the framework of the first inverse problem, Eq. (4). In panels a, d and g, the solid line represents the Gaussian distribution estimated from the normalized scaling factors, of which the mean and standard deviation are reported in the top left corners. In panels b, e and h, we report the number of earthquakes (within round brackets) for which the estimated scaling factor differs by less than 100, 50 and 10% from the expected value of 1

5.2 Second Inverse Problem

Let us now consider the more realistic case in which the earthquake signature must be discriminated from other time-variable geophysical processes, like the AOHIS contributions, which we now include in the modelling. As in Sect. 5.1, we begin by presenting the results for the 62nd and 64th earthquakes, although we increase their magnitude from 7 to 7.8. The estimates of the scaling factor are \(\gamma =0.96 \pm 0.08\) and \(1.08\pm 0.03\), respectively.

Looking at the fits to the time series of Slepian coefficients (Figs. 16, 17), we note some time series in which the observational errors dominate over the modelled signals, and others in which the modelling of the AOHIS contributions is far from accurate and thus responsible for large modelling errors. On the other hand, there are a few time series where we can recognize a step-like discontinuity due to the co-seismic phase (as, for example, in the 1st and 4th time series of the 62nd earthquake) or a departure from the linear trend due to the post-seismic phase (as, for example, in the 2nd time series of the 62nd earthquake and in the 2nd and 7th time series of the 64th earthquake).

Fig. 16

Time series of modelled and observed Slepian coefficients (black and grey lines, respectively) for the 62nd \(M_{\mathrm {W}}=7.8\) earthquake according to the second inverse problem, Eq. (5). The modelled time series thus represent the earthquake gravity signature scaled by the estimated scaling factor \(\gamma\) and the modelled AOHIS contributions in terms of linear trends and annual and semi-annual periodic signals. For the sake of visualization, the modelled static gravity field has been removed and the vertical scale changes among the panels. The number in the lower left corner of each panel indicates the Slepian function to which the time series refers (as ordered in Fig. 12)

Fig. 17

As Fig. 16 but for the 64th \(M_{\mathrm {W}}=7.8\) earthquake

This is better shown in Figs. 18 and 19, after the removal of the modelled AOHIS contributions. For the 62nd earthquake, we note that the residuals are comparable with the earthquake signature and are characterized by temporal correlations mainly attributable to modelling errors of the annual and semiannual periodic signals. These considerations hold also for the case of the 64th earthquake, although its gravity signature, especially the post-seismic one, can be comparable to or even greater than the residuals as, for example, in the 1st, 2nd, 7th and 8th time series.

Fig. 18

As Fig. 16, but the modelled AOHIS contributions (i.e., the estimated linear trends and annual and semiannual periodic signals) have been removed in order to focus on the fitting of the earthquake signature

Fig. 19

As Fig. 18 but for the 64th \(M_{\mathrm {W}}=7.8\) earthquake

From this visual inspection of the results, we understand that a better modelling of the AOHIS contributions can lead to substantial improvements in the detection of the earthquake signature. For instance, rather than the deterministic approach adopted here, a stochastic model of slowly varying trends and of the annual and semi-annual signals (Didova et al. 2016) would reduce the modelling error, thus making the earthquake signature even more evident, especially for the 62nd earthquake. We do not pursue this issue further in the present work, although we remark that the conclusions drawn from the second inverse problem will be conservative and that there is still room for improvement in order to make the most of the NGGM data.

In order to get an overall idea of the earthquake detectability within the framework of the second inverse problem, we consider earthquakes of magnitudes 7.6, 7.8 and 8 and show in Fig. 20 the analysis of the estimated scaling factors. In contrast to the first inverse problem, the normalized scaling factors obey a Gaussian distribution (according to the Shapiro–Wilk test with a significance level of at least 5%) only at the cost of excluding a few tens of the largest (in absolute value) normalized scaling factors, which we thus consider as outliers. We note also that the means of the estimated Gaussian distributions are close to the expected value of zero, but their standard deviations are about three times greater than the expected value of 1. This indicates that the second inverse problem tends to underestimate the uncertainty of the scaling factor, owing to the simplistic way in which we take the modelling errors into account, that is, by means of the hyper-parameter \(\alpha\).

Fig. 20

As Fig. 15 but for the earthquakes of magnitudes 7.6, 7.8 and 8 (top, middle and bottom rows, respectively), within the framework of the second inverse problem, Eq. (5). The unfilled bars refer to the outliers that we have excluded so that the remaining normalized scaling factors obey a Gaussian distribution, according to the Shapiro–Wilk test with a significance level of at least 5%. The numbers of outliers are reported in the legends of the left panels

Despite the difficulties due to the simplistic modelling of the AOHIS contributions adopted here, we note that, for the \(M_{\mathrm {W}}=7.8\) case, more than half of the scaling factors (167 out of 291) are estimated with a relative accuracy better than 10%, and 86% of them with a relative accuracy better than 50%. We also note that only one-third of the \(M_{\mathrm {W}}=7.6\) earthquakes (115 out of 291) and more than two thirds of the \(M_{\mathrm {W}}=8\) earthquakes (220 out of 291) are retrieved with a relative accuracy better than 10%. In light of these results, we consider the magnitude \(M_{\mathrm {W}}=7.8\) as the minimum threshold for which the earthquake signature can be detected by the NGGM with good accuracy in most cases. As already argued, this is a conservative threshold, and improvements in the modelling of the AOHIS contributions can reduce it further, down to \(M_{\mathrm {W}}=7\) in the ideal case in which the AOHIS contributions can be exactly removed (or, equivalently, modelled).

6 Conclusions

Starting from the SC4MVG (Iran Pour et al. 2015) and ADDCON (Pail et al. 2018) results, we have carried out a simulation of a 12-year-long NGGM to quantify the detectability of the gravitational signal of different types of earthquakes. The gravity field is recovered over 28-day periods and up to SH degree and order 140 by iterative weighted least-squares estimation, which permits us to take into account coloured noise in the weighting by estimating the residual PSD. It should be borne in mind, however, that the simulation output may give slightly optimistic results compared to a full-fledged simulation, in particular since tidal signals were not considered.

We then have investigated the possibility of detecting the earthquake signals from NGGM data via closed-loop simulations, where the background gravity field consists of the sum of a static gravity field, the AOHIS contributions of the updated ESA-ESM (Dobslaw et al. 2015) and the gravity signatures of 291 earthquake scenarios, chosen in order to explore a wide range of earthquake types and rheological stratifications. For each investigated earthquake magnitude, we prescribe the co-seismic slip on the basis of the circular fault model (Eshelby 1957) and calculate the co- and post-seismic gravity signatures using a self-gravitating, compressible, viscoelastic Earth model (Sabadini et al. 2016) based on PREM (Dziewonski and Anderson 1981). We also account for the effects of ocean water redistribution (Cambiotti et al. 2011) for those earthquakes occurring offshore.

When the earthquake signature must be discriminated from other time-variable processes, such as those of the five AOHIS compartments of the updated ESA-ESM, we succeed in estimating its amplitude with a relative error of less than 10% in most cases for magnitudes \(M_{\mathrm {W}}=7.8\) and greater. This result is based on a very simplistic modelling of the AOHIS contributions (in terms of linear trends and annual and semi-annual periodic signals), so a more realistic modelling can further decrease this minimum magnitude threshold. In particular, we show that, when the AOHIS contributions are exactly removed from the NGGM synthetic data, which can be done since we know the background gravity field in the simulation environment, the minimum magnitude threshold becomes \(M_{\mathrm {W}}=7\). This second result means that earthquake signatures of about \({1}\,\upmu \hbox {Gal}\) (up to SH degree 140), including the co-seismic contribution and the post-seismic one cumulated over a few to several years, can be discriminated from the static gravity field and, especially, from the expected observational errors of the NGGM.

In this respect, it is worthwhile to mention that active tectonic processes, such as the subduction responsible for the earthquakes along the Pacific Ring of Fire, can produce gravity rates of change of about \(0.008 \,\upmu \hbox {Gal/year}\) per centimetre of convergence velocity for young tectonic structures (Marotta et al. 2020), namely gravity disturbances of \(0.4\,\upmu \hbox {Gal}\) over the NGGM mission lifetime of one decade (assuming an average convergence velocity of 5 cm/year), comparable with the gravity signature of \(M_{\mathrm {W}}=7\) earthquakes. In light of this, and on the basis of preliminary results still under investigation within the framework of the Gravitational Seismology project, we can expect that NGGM will also be able to detect the gravity signatures of the processes responsible for earthquake generation, thus contributing to seismic hazard studies.

A step forward compared to the synthetic model of the time-variable gravity disturbances due to the earthquakes considered here (made available to the scientific community at https://sites.unimi.it/grav_seismology) will be to build it on the basis of earthquake catalogues. In this way, the synthetic model, in addition to being based on real data like the other AOHIS components of the updated ESA-ESM, will also reflect the real seismotectonic contexts in which the earthquakes are generated.

The possibility of studying earthquakes by means of satellite-derived gravity data supplements the information on elastic waves from seismometers and on crustal displacements from other geodetic techniques, such as GNSS and SAR, with information on deep mass rearrangements, regardless of whether they occur inland or at sea, the latter often surrounding inter-plate regions with high seismic activity. Furthermore, when this possibility is extended to earthquakes of even moderate magnitude, as we have just shown, this additional information becomes relevant also in intra-plate regions. In this respect, gravity data from NGGM can constitute a new methodology for monitoring both inter- and intra-plate deformations, bringing benefits for seismic hazard assessment worldwide.