This work is distributed under the Creative Commons Attribution 4.0 License.
Tracking slow-moving landslides with PlanetScope data: new perspectives on the satellite’s perspective
Abstract. PlanetScope data with daily temporal and 3-m spatial resolution hold unprecedented potential to quantify and monitor surface displacements from space. Slow-moving landslides, however, are complex and dynamic targets that alter their topography over time. This leads to orthorectification errors, resulting in inaccurate displacement estimates when images acquired from varying satellite perspectives are correlated. These errors become particularly concerning when the magnitude of the orthorectification error exceeds the signal from surface displacement, which is the case for many slow-moving landslides with annual velocities of 1–10 m/yr. This study provides a comprehensive assessment of orthorectification errors in PlanetScope imagery and presents effective mitigation strategies for both unrectified L1B and orthorectified L3B data. By implementing these strategies, we achieve sub-pixel accuracy, enabling the estimation of realistic and temporally coherent displacement over landslide surfaces. The improved signal-to-noise ratio results in higher-quality disparity maps, allowing for a more detailed analysis of landslide dynamics and their driving factors.
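The image correlation underlying such displacement estimates can be illustrated with upsampled phase cross-correlation. This is a generic sketch assuming scikit-image is available; it is not the processing chain used in the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.registration import phase_cross_correlation

def estimate_offset(reference, target, upsample=20):
    """Estimate the (row, col) shift that registers `target` onto
    `reference`; upsampling yields 1/upsample-pixel precision."""
    shift, error, _ = phase_cross_correlation(
        reference, target, upsample_factor=upsample)
    return shift

# Synthetic check: a smooth random surface displaced by a known
# number of pixels. np.roll keeps the shift circular, matching the
# periodicity assumption of phase correlation.
rng = np.random.default_rng(0)
surface = gaussian_filter(rng.random((128, 128)), sigma=2)
displaced = np.roll(surface, shift=(2, -3), axis=(0, 1))

# The returned shift registers `displaced` back onto `surface`,
# i.e. the negative of the applied displacement: about (-2, 3).
offset = estimate_offset(surface, displaced)
```

In real scene pairs this matching is done per window to build a disparity map, and orthorectification errors enter as spatially correlated apparent offsets that the matcher cannot distinguish from true surface displacement, which is the problem the preprint addresses.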
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
Interactive discussion
Status: closed
-
CC1: 'Comment on egusphere-2023-1698', Mahmud Muhammad, 06 Sep 2023
Excellent literature review and thorough analysis. I'd like to delve deeper into contemporary methods for determining land displacement using optical images. Often, when discussing this topic, many researchers lean towards image correlation techniques. Your paper adeptly highlighted the constraints of these methods in tracking landslide movements. However, alternative methods exist, such as optical flow, which contrasts with image correlation. It's proficient in detecting both minute (centimeter-scale) and substantial (meter-scale) land displacement. Our recent work, Muhammad et al., 2022, explored the principles of optical flow and its efficacy in monitoring landslide movement. We've now broadened the application scope of the optical flow algorithm to pinpoint land deformations at the centimeter level. This development has taken shape in the form of an alpha-version Python package called "akhdefo-functions." I'm in the process of creating a video tutorial for it and am eager to partner and offer guidance. I'm optimistic about the community's involvement in refining these methods further.
Citation: https://doi.org/10.5194/egusphere-2023-1698-CC1 -
CC2: 'Reply on CC1', Mahmud Muhammad, 06 Sep 2023
The video below explains well the use of, and differences between, optical flow and template matching/image correlation methods.
https://youtu.be/VSSyPskheaE
Citation: https://doi.org/10.5194/egusphere-2023-1698-CC2 -
AC1: 'Reply on CC1', Ariane Mueting, 07 Oct 2023
Thank you for your comment. Indeed, optical flow presents a viable alternative to traditional image cross-correlation techniques. We will mention alternative matching approaches in the revised manuscript. The focus of our work, however, is on inherent distortions in the input imagery and how these can be mitigated. If orthorectification errors are present, they will also affect displacement estimates derived with optical flow. Here is a comparison of disparity maps across the Siguas landslide obtained with image cross-correlation and optical flow (OpenCV implementation) for an image pair that was acquired 10 days apart, so the surface can be assumed to be stable:
Citation: https://doi.org/10.5194/egusphere-2023-1698-AC1
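As a complement to the thread above, a minimal dense optical-flow sketch. It uses scikit-image's TV-L1 implementation as a stand-in for the OpenCV flow mentioned in the reply; the Siguas imagery itself is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.registration import optical_flow_tvl1

def dense_displacement(reference, moving):
    """Per-pixel (row, col) flow (v, u) such that
    moving(r + v, c + u) approximately equals reference(r, c)."""
    v, u = optical_flow_tvl1(reference, moving)
    return v, u

# Synthetic check: translate a smooth textured surface 3 px along
# the column axis; np.roll makes the translation exact (circular).
rng = np.random.default_rng(1)
ref = gaussian_filter(rng.random((96, 96)), sigma=2)
mov = np.roll(ref, 3, axis=1)  # a feature at column c moves to c + 3

v, u = dense_displacement(ref, mov)
# Away from the wrapped borders the column flow should be near +3 px.
interior_u = float(np.median(u[20:-20, 20:-20]))
```

Because optical flow, like cross-correlation, only measures apparent motion between the two rasters, orthorectification errors in the inputs propagate into the flow field exactly as the reply states.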
-
RC1: 'Comment on egusphere-2023-1698', Pascal Lacroix, 05 Oct 2023
General comments:
This study by Mueting and Bookhagen is part of a logical sequence of studies (Kaab et al., 2017; Feng et al., 2020; Mazzanti et al., 2020; Aati et al., 2020; Aati et al., 2022; ...) on the processing of CubeSat PlanetScope high-frequency data, and in particular on the improvement of orthorectification errors for the measurement of ground displacements, applied here to landslides. This is a very important subject, given the usefulness of these data, particularly for monitoring landslides (Bradley et al., 2019; Mazzanti et al., 2020; Dille et al., 2021; Amici et al., 2022; Lacroix et al., 2023; ...).
The work carried out here is in-depth and deserves to be published. However, in its current state, the article suffers from several limitations, the main ones being: (1) the flow of the article is not always easy to follow and therefore it deserves to be reorganised and partly rewritten, (2) there is a lack of quantitative data to validate the different approaches proposed, (3) the discussion comparing the different approaches proposed is lacking, which shows the need to move towards a more systematic quantification of errors.
I think it should be possible to publish this study in Esurf once these major comments and all the other minor points have been taken into account.
Pascal Lacroix
Major comments
1) The reading flow is not easy to follow, due to (a) long descriptions of methods, which could be greatly shortened (in addition, a general scheme at the beginning of section 4 would certainly be very useful to explain your processing chain from L3B or L1B images), (b) part of the description of the results being included in the figure legends, (c) (too) many descriptive figures (I think a better selection of figures should be made. For example, Figure 5 does not add much to understanding and can be placed in the supplementary material. Figures 4 and 6 illustrate the same effect), (d) the method section also including results.
2) The validation of the results is not really quantitative. Statistics should at least be provided to show the improvement of the different steps, for all sets of images. Here, a single histogram is shown for a correlation between 2 images (Figure 14). Why not extract a statistic (SD for example) from this histogram and compare it for the different steps for all pairs?
In addition, you could provide more quantitative validation by comparing your results with field measurements, which exist at least on the Siguas landslide (Lacroix et al., 2019).
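The per-pair statistic requested above could be computed directly from each disparity map over off-landslide terrain; a hypothetical sketch (the function and mask names are illustrative, not from the manuscript):

```python
import numpy as np

def disparity_stats(disparity, stable_mask):
    """Spread statistics of a disparity map over stable terrain.

    Returns the mean, standard deviation, and NMAD (normalized
    median absolute deviation, a robust spread estimate) of the
    masked values, in the map's units (pixels or metres).
    """
    vals = disparity[stable_mask]
    vals = vals[np.isfinite(vals)]        # ignore correlation gaps
    med = np.median(vals)
    nmad = 1.4826 * np.median(np.abs(vals - med))
    return {"mean": float(vals.mean()),
            "std": float(vals.std()),
            "nmad": float(nmad)}

# Toy example: three stable pixels plus one masked-out moving pixel.
disp = np.array([[1.0, 2.0], [3.0, 100.0]])
stable = np.array([[True, True], [True, False]])
stats = disparity_stats(disp, stable)
```

Tabulating such values for every pair and every correction step would quantify the improvement the reviewer asks for.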
3) The authors use 2 different processing approaches to extract displacement fields, using L3B or L1B images. I think there is a lack of clear discussion on which of these 2 approaches is more efficient. This discussion should be based on a more quantitative assessment of the errors on each of the processed pairs (see my previous comment). For me, this discussion should also include a systematic analysis (for all pairs) of subframe misalignment errors. The authors claim that they are reduced in the latest acquisitions. Could the authors clarify why they have come to this conclusion, and when this improvement was made?
Detailed comments
L15: "geoscientific": I would rather say "geomorphic".
L36-37: "landslides are prone to orthorectification errors": It would be useful to quantify this orthorectification error. I suggest reviewing all the uncertainties associated with the use of PlanetScope data for landslide studies (Bradley et al. 2019; Mazzanti et al. 2020; Dille et al., 2021; Amici et al., 2022; Lacroix et al., 2023, ...).
L75: It would be interesting if you also mentioned that monitoring already exists on the Siguas landslides, which could be used to validate your results (Lacroix et al., 2019). See my main comment no. 1.
2.1: Are there independent estimates of the speed of the Del Medio landslide?
Figure 1: As things stand, the black and blue lines mentioned in the legend are difficult to see. Is there any real point in showing the road network?
L110: "NIR measurements are stored at the green pixels of the RGB Bayer-mask (Planet, 2022a)." This is not clear. Besides, is there any point in knowing this information? In general, I think the authors should simplify their text to make it easier to read (see my main comment no. 1).
L114: "NIR band is captured at a different time": Can you specify what the timeframe is?
Figure 3: This is a nice figure to show the different errors. I would simply reverse the order of the legend so that it corresponds to the order of the sub-figures: (1) DEM error (A), (2) striping errors (B, C, D), (3) overall shifts between scenes (C, A), (4) stereoscopic errors (D).
L155-156: It should be noted that the error associated with a global offset is classically corrected for slow slide studies using PlanetScope, which significantly reduces the errors (see also my comment on the uncertainty associated with PlanetScope images of slow slides l36-37).
L240: L1B images are also available in clipped format.
Figure 5: "Scenes acquired from an opposite view direction at high view angles are strongest affected by orthorectification errors.": Opposite view direction to what? Why should orthorectification errors be stronger at some specific viewing angles? This sentence is not clear.
Furthermore, Figure 5 may not be necessary to understand the study. In fact, it is mentioned only once in the text. Could you place this figure in the supplements?
Lines 250-287: This section can really be reduced. Figure 4 illustrates this well. I also wonder if this section should not be mixed with section 3.1, when the effect of orthorectification errors is illustrated in figure 2. In this section, you do not propose a method for reducing this error, but you do illustrate it. In my opinion, it should not be included in the "data and methods section".
Reorganisation: Lines 288 to 313: I get the impression that this section is a bit vague and that the flow is not easy to follow because it's a mixture of methods and results. If I understand you correctly, you identify the acquisition parameter that allows you to form pairs and reduce uncertainties while correlating them. I have the impression that you could state this much more clearly and separate the methods from the application to the data. The choice of figures also makes things less easy to follow: Figure 6 is closely related to Figure 4 in terms of illustrating the problem (perhaps one of the two figures could be placed in the supplementary material?) Figure 7 is an application of the methods that shows the important effect of the actual azimuth of observation. Figure 8 shows your results once the groups have been created.
In the same section, it is not clear, once your pairs are created, how you will use them to create a time series of movements. In fact, there is no possible relationship between the different groups of images. How do you put them back together? I have the impression that a general diagram at the beginning of section 4 would be useful to explain your processing chain based on images L3B or L1B.
Line 375: I have the impression that the method you describe has already been described and used by Berthier et al. (2007). You can certainly simplify your text by referring to it.
Line 394: MPIC-OPT is not strictly a correlator but a processing chain that does more than correlate. The correlator behind MPIC-OPT is Mic-Mac (Rupnik et al., 2017).
Line 399: Why do you use 35x35 pixel windows? Did you do several trials before choosing?
Line 399: I would also delete "slightly larger correlation".
Figure 11: There is no reference to sub-graphs C and D.
Line 417: You mention a striping effect due to the misalignment of the sub-frames, but I assume that your polynomial (line 420) does not correct this effect. How effective is your polynomial at correcting other artefacts when you have such effects?
Line 435: The speed of the landslide may vary over time, but you are assuming here that the speed is constant over the period in question. This needs to be made clear. In fact, you mention this transient in the caption to Figure 12. This assumption could be verified by comparing satellite measurements with field data. These validation data are available on the Siguas landslide (Lacroix et al., 2019). See also my following comment on the results validation.
By the way, your Figure 13 seems to show that the standard deviation of velocity is significantly reduced when using manually orthorectified 1B products compared to correlating 3B products from the same group. To me, this means that the higher standard deviation observed on the landslide in Figure 12 does not come from transient motion but rather from orthorectification errors in the 3B products, even if you select the pairs. Can you comment on this point?
Line 436: remove an "only".
Figure 15: I don't quite understand how you obtain these time series. I understand that you correlate either the L1B data from the same group or the L3B data from group 4, but do you correlate them all within each group or do you only correlate those that are separated by the shortest time to see the transients?
Line 541: The correct reference is Lacroix et al., 2019, not 2015.
Lines 550-553: Are you removing the low-quality pixels from the "good pixel map"? In that case it should largely remove the changes in soil occupation, and therefore the errors. Are you sure that the higher standard deviation with time in the Siguas case study is not caused by the motion of the landslide, which occupies quite an important area of the image?
Rather than a hypothetical section on "Transferability to other regions and targets", I would rather have seen a discussion of which of the L1B or L3B processing we should use (see my major comment no. 3). From the figures you show, manually orthorectified L1B sounds more efficient, except for the sub-frame alignment. However, this analysis is lacking for all the scenes processed. Furthermore, is the sub-frame alignment really better now?
Figure 15 and lines 610-612: Can you explain how you obtain the shown uncertainties, and add uncertainties to your velocity estimations?
Citation: https://doi.org/10.5194/egusphere-2023-1698-RC1 -
AC2: 'Reply on RC1', Ariane Mueting, 13 Nov 2023
Dear Pascal Lacroix,
Thank you for this detailed review of our work. The suggestions were well taken. In summary, we have performed the following steps to accommodate your points:
- The manuscript has been restructured and the method section has been shortened.
- We re-arranged figures, moved figures to the supplementary material, and created new figures to better show our scientific results.
- We extended the quantitative assessment of the proposed correction method.
- We added an analysis of the magnitude of orthorectification errors, as well as a discussion about the benefits and drawbacks of both L1B and L3B data for offset tracking.
Please find our detailed response to your individual comments attached.
Sincerely,
Ariane Mueting
on behalf of the authors
-
RC2: 'Comment on egusphere-2023-1698', Shashank Bhushan, 16 Oct 2023
Dear authors,
Please find my comments in the attached pdf.
-
AC3: 'Reply on RC2', Ariane Mueting, 14 Nov 2023
Dear Shashank Bhushan,
Thanks a lot for this elaborate and constructive review of our work. We have carried out the following main changes:
- The manuscript has been restructured and the method section has been shortened.
- We re-arranged figures, moved figures to the supplementary material, and created new figures to better show our scientific results.
- We extended the quantitative assessment of the proposed correction method.
- We added an analysis of the magnitude of orthorectification errors, as well as a discussion about the benefits and drawbacks of both L1B and L3B data for offset tracking.
Please find our responses to each individual comment attached.
Sincerely,
Ariane Mueting
on behalf of the authors