the Creative Commons Attribution 4.0 License.
Accelerated lowland thermokarst development revealed by UAS photogrammetric surveys in the Stordalen mire, Abisko, Sweden
Abstract. The estimation of greenhouse gas (GHG) emissions from permafrost soils is challenging, as the propensity of organic matter to decompose depends on factors such as soil pH, temperature, and redox conditions. Over lowland permafrost soils, these conditions are directly related to the microtopography and evolve with physical degradation, i.e., lowland thermokarst development (a local collapse of the land surface due to ice-rich permafrost thaw). A dynamic quantification of thermokarst development – still poorly constrained – is therefore a critical prerequisite for predictive models of permafrost carbon balance in these areas. This requires high-resolution mapping, as lowland thermokarst development induces fine-scale spatial variability (~50 – 100 cm). Here we provide such a quantification, updated for the Stordalen mire, Abisko, Sweden (68°21'20"N 19°02'38"E), which displays a gradient from well-drained stable palsas to inundated fens that have undergone ground subsidence. We produced RGB orthomosaics and digital elevation models from very high resolution (10 cm) unoccupied aircraft system (UAS) photogrammetry as well as a spatially continuous map of soil electrical conductivity (EC) based on electromagnetic induction (EMI) measurements. We classified the land cover following the degradation gradient and derived palsa loss rates. Our findings confirm that topography is an essential parameter for determining the evolution of palsa degradation, enhancing the overall accuracy of the classification from 41 % to 77 %, with the addition of slope allowing the detection of the early stages of degradation. We show a clear acceleration of degradation for the period 2019 – 2021, with a decrease in palsa area of 0.9 – 1.1 %·a-1 (% reduction per year relative to the entire mire) compared to previous estimates of ~0.2 %·a-1 (1970 – 2000) and ~0.04 %·a-1 (2000 – 2014).
EMI data show that this degradation leads to an increase in soil moisture, which in turn likely decreases organic carbon geochemical stability and potentially increases methane emissions. With a palsa loss of 0.9 – 1.1 %·a-1, we estimate accordingly that surface degradation at Stordalen might lead to a pool of 12 metric tons of organic carbon exposed annually for the topsoil (23 cm depth), of which ~25 % is mineral-interacting organic carbon. Likewise, average annual emissions would increase from ~ 7.1 g-C·m-2·a-1 in 2019 to ~ 7.3 g-C·m-2·a-1 in 2021 for the entire mire, i.e., an increase of ~1.3 %·a-1. As topography changes due to lowland thermokarst are fine-scaled and thus not possible to detect from satellite images, circumpolar up-scaling assessments are challenging. By extending the monitoring we have conducted as part of this study to other lowland areas, it would be possible to assess the spatial variability of palsa degradation/thermokarst formation rates and thus improve estimates of net ecosystem carbon dynamics.
Status: final response (author comments only)
- RC1: 'Comment on egusphere-2025-3788', Anonymous Referee #1, 18 Aug 2025
- RC2: 'Comment on egusphere-2025-3788', Anonymous Referee #2, 20 Dec 2025
General Comments:
Thomas et al. looked at palsa degradation in the Stordalen mire near Abisko, Sweden. To quantify the degradation, they derived land cover changes from UAS orthomosaics and digital surface models between i) 2019 and 2021, and ii) 2014-2022. For the classification of land cover and land cover change, they combined different image processing techniques of the input data and trained a support vector machine to achieve a maximum overall accuracy of 81.3 %.
The authors identify topography information as a vital model input and observe an increase in soil moisture and inundated areas throughout the mire.
Concerning the clarity and content of the study, I have the following concerns:
Methods:
- In general, this section lacks sufficient explanation and referencing regarding the choice and implementation of algorithms or techniques, e.g., GLCM, RBF, the mean permutation analysis, and how the refinement of initial predictions was done using the confusion matrices. These approaches should be clearly described and supported with references. Further, you mention for the first time in the results section that several model runs were performed with different inputs. It would help readers if this aspect was already introduced and described in the methods section, perhaps including a small workflow figure to illustrate the process. I included some specific comments on this further down.
- You state that you calculated the slope using an “uncorrected” DSM, but wouldn’t the bowl effect overestimate the slope?
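For illustration, a residual bowl-shaped (doming) artifact in a DSM adds a radial gradient that any slope computation picks up directly. A minimal numpy sketch (the grid size, the 0.5 m amplitude, and the parabolic form are hypothetical, chosen only to show the effect; this is not the authors' data):

```python
import numpy as np

# Hypothetical flat 100 m x 100 m surface sampled at 1 m spacing, with a
# parabolic "bowl" artifact added, as can arise from radial lens-distortion
# errors in photogrammetric DSMs.
n = 101
x = np.linspace(-50, 50, n)
X, Y = np.meshgrid(x, x)
flat = np.zeros((n, n))
bowl = 0.5 * (X**2 + Y**2) / 50**2      # 0 m at the centre, ~0.5-1 m at edges
dsm = flat + bowl

# Slope in degrees from finite differences (1 m grid spacing)
dzdy, dzdx = np.gradient(dsm, 1.0)
slope = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
print(slope.max())
```

Even this modest artifact induces spurious slopes above 1° near the margins of the scene while the true terrain is perfectly flat, so the question of whether slope was derived before or after bowl correction matters.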
Results:
It is somehow difficult to understand some of your results as numbers change throughout the text or are introduced without explanation, e.g.
- in Section 3, you present a value of 24 metric tons of TOC, including 6 metric tons of MAOC for your study site, but under Section 4 you refer to 12 metric tons.
- The abstract reports an overall accuracy of the model of 41-77%, yet you achieved an OA of roughly 82 % after the refinement and 80 % without refinement but with the std. spatial filter. Did you intentionally select the land cover map from the less accurate (77%) model configuration, and if so, why? This ties into the incomplete description of model runs in the methods section, which makes it unclear which input configuration performed best. Consider describing and naming the different runs clearly and then specifying which one yielded the best result.
- I am also somewhat unclear about how the CH4 measurements (and their increase) under 3.4 are calculated.
Writing style:
- The manuscript would benefit from using the active rather than the passive voice, as this makes it easier for readers to distinguish between the work conducted by you and the data obtained from others. This distinction is especially important since both your own drone acquisitions and data from other sources are used.
- Further, I advise ensuring consistency in the terminology and in the way processed data are referred to throughout the text (e.g. F1/F score, thermokarst vs. abrupt thaw, DEM vs. DSM). In addition, many sentences are quite wordy and nested, which makes the text difficult to follow at times. A general English language editing would improve readability. In some cases, I included suggestions in the specific comments below.
Specific comments:
Abstract, p. 1: The abstract in its current form is too long, the results alone occupy almost 180 words. Please consider shortening it to focus on your major outcomes.
- L 19: “propensity”, p. 1: I only know "to have a propensity to do sth." or "to have a propensity for sth." - maybe clarify with a native speaker? An alternative might be "as the decomposition of organic matter depends on ..."
- L 24: “for the Stordalen mire in Abisko, Sweden for the Stordalen mire, Abisko, Sweden”, p. 1: This is mentioned twice.
- L 26: “digital elevation models”, p. 1: Throughout your manuscript, you say "digital surface models". I would suggest staying consistent in order to avoid confusion.
- L 27: “map”, p. 1: maps.
- L 30: “enhancing the overall accuracy of the classification from 41% to 77%”, p. 1: You should mention earlier that you classify the land cover using an SVM model in order to refer to it here. Further, in the results you say that you achieved a maximum OA of 81.3 %.
L 45: “near-surface permafrost area”, p. 2: do you mean "area underlain by permafrost"?
L 45: “Intergovernmental Panel on Climate Change (IPCC): scenario RCP2.6”, p. 2: I think you can leave out that the RCPs were adopted by the IPCC, this is more or less common knowledge. You should, however, once state that RCP stands for Representative Concentration Pathways, e.g. for the Representative Concentration Pathways (RCPs) 2.6 and 30 – 99% for RCP 8.5.
L 50: “projected thaw”, p. 2: "projected thaw of permafrost soils"
L 51: “∼1% of anthropogenic radiative forcing”, p. 2: Can you give a value for this?
L 54: “arctic”, p. 2: Arctic
L 56: “refer to”, p. 2: Do you mean "observe"?
L 57: “abrupt thaw”, p. 2: I assume here you mean the erosion and mass movement processes triggered by abrupt thaw, such as thaw slumps?
L 59: “cycles.”, p. 2: please add references here.
L 59: “IPCC”, p. 2: As far as I know the RCPs were developed by the research community and only adopted by the IPCC.
L 61: “below”, p. 2: "below air temperatures of ..."
L 62: “propensity”, p. 2: See comment in abstract.
L 64: “the latter of which may change drastically following certain thermokarst developments”, p. 2: Could you briefly explain why that is? Do you have a reference for this?
L 66-67, p. 2: This sentence is a bit confusing. Turetsky et al. state that GHG emissions across 2.5 million km² of abrupt-thaw-affected land could produce a feedback comparable to that of land affected by gradual thaw. I would not use abrupt thaw and thermokarst synonymously, as thermokarst can be gradual and slow. Further, the number you are referencing for gradual thaw is for the entire permafrost region (18 million km²), underlining the need to incorporate abrupt-thaw processes and their GHG emissions in these models.
L 69: “increasingly negative”, p. 3: What becomes "increasingly negative"?
L 71: “The approach”, p. 3: Which approach? Are you still referring to Turetsky?
L 90-91: “Reported rates of degradation are extremely variable, i.e., range from 0.04%·a-1 to 0.7%·a-1 (% reduction per year) of total land cover area”, p. 3: “Reported rates of degradation, quantified by changes in total land cover, range from …”
L 98: Why revisit? Is there a study preceding this one, the reader should be aware of?
L 99: “obtained from color imagery/RGB orthomosaics”, p. 4: "obtained from RGB orthomosaics"
L 103: “open water areas”, p. 4: Which open waters are you referring to here (lakes, thermokarst ponds ?) and why do you do this? Is the data for 2014-19 and 2021-22 derived from satellites? Please add 1-2 explanatory sentences.
L 105: “classification”, p. 4: This is quite suddenly introduced. It might help to mention further up in the paragraph that you quantify palsa degradation rates from UAV imagery by using a model to predict the land cover and change thereof.
L 122: “measurements within an extent of less than 10 m2”, p. 4: Did you do these measurements or are they taken from another study?
L 124: “based on METER TEROS 12 probes measurements:”, p. 4: I am unfamiliar with the term TEROS 12. I suggest referring to this device as "soil moisture sensor TEROS 12..." to clarify. Could you please also clarify if you did these measurements yourself or if they are taken from another study? From which month and year are the measurements?
L 137: “forward”, p. 5: What was the side overlap? And were these flight parameters identical to the 2019 flight?
L 144: “evolution of the physical degradation”, p. 6: This term appears many times throughout the text and is quite wordy. Might it be an alternative to use "land surface degradation" or, more specific to your study, "palsa degradation"?
L 151: “Together, the 2019 and 2021 datasets provide complementary topographic information and adequate coverage”, p. 6: This reads like the UAS covered different areas of the mire in 2019 than in 2021.
L 154 - L 160, p. 6: This paragraph reads a bit confusing. From the table A 1, I would assume that you used RGB images of the same mire originally acquired for different studies to complement your time series. However, here you refer to a dataset that delivers estimates of degradation trends. Could you please clarify what you mean?
L 162: “The various pre-processing operations carried out on the data are detailed below:”, p. 6: I suggest to clearly state that this is the pre-processing of the UAS data to avoid confusion. Personally, I prefer full sentences, explaining processing steps as they are more reader-friendly. If you prefer bullet points, please make sure to add a little more detail on the individual steps, see comments below.
L 165: “Co-registration”, p. 6: Which co-registration algorithm did you use? And was this also done for the complementary RGB images (2014-19 and 2022)
L 168: “at”, p. 6: to
L 169- 170: “Removal of the bowl-shape effect from elevation data (DSM), to eliminate systematic distortions or artifacts caused by sensor or data processing errors, ensuring accurate inter-annual comparison (Fig. A. 21)”, p. 6: Shouldn't this step come before deriving the slope from the DSMs?
L 190: “Briefly, the classification process involves”, p. 7: see comment on data pre-processing
L 192: “times the standard deviation”, p. 7: Could you provide a reference for this approach or briefly explain why you did this?
L 195-199, p. 8: So are these mean and standard deviation calculations applied on a stack of RGB, DSM, and slope ("original bands") or on the individual maps? Please state that you did this to normalize the data. This will make it more accessible to readers who are not that familiar with image processing pipelines.
L 201: “The calculated texture attributes include entropy, angular second momentum, contrast, homogeneity and the standard deviation of the GLCM, all applied within a 21 × 21 moving window.”, p. 8: What are you calculating these for?
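For context, GLCM texture features are typically computed per band within each moving window: grey levels are quantized, co-occurrences of neighbouring pixel values are counted for a chosen offset, and scalar statistics are derived from the normalised matrix. A minimal sketch with toy 4-level windows and a single horizontal offset (not the authors' implementation, which presumably uses a library):

```python
import numpy as np

def glcm(window, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset, symmetric, normalised."""
    m = np.zeros((levels, levels))
    h, w = window.shape
    for i in range(h - dy):
        for j in range(w - dx):
            a, b = window[i, j], window[i + dy, j + dx]
            m[a, b] += 1
            m[b, a] += 1          # symmetric counting
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return np.sum(p * (i - j) ** 2)

def homogeneity(p):
    i, j = np.indices(p.shape)
    return np.sum(p / (1.0 + np.abs(i - j)))

# A uniform patch is "smooth" (low contrast, high homogeneity); a
# checkerboard of grey levels 0 and 3 is maximally "rough" for this offset.
uniform = np.zeros((5, 5), dtype=int)
noisy = np.array([[0, 3, 0, 3, 0],
                  [3, 0, 3, 0, 3],
                  [0, 3, 0, 3, 0],
                  [3, 0, 3, 0, 3],
                  [0, 3, 0, 3, 0]])
p_u, p_n = glcm(uniform, 4), glcm(noisy, 4)
print(contrast(p_u), contrast(p_n))        # 0.0 vs 9.0
print(homogeneity(p_u), homogeneity(p_n))  # 1.0 vs 0.25
```

This is why stating *which* bands the texture attributes are computed for matters: the same window can look smooth in one band and rough in another.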
L 206: “radial basis function (RBF) as kernel and the gamma parameter set as automatic”, p. 8: Why did you set them like that?
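For reference, the RBF kernel measures similarity as K(x, y) = exp(-gamma * ||x - y||^2), and an "automatic" gamma commonly defaults to 1/n_features (scikit-learn's historical 'auto') or 1/(n_features * Var(X)) ('scale'). A small sketch, assuming the scikit-learn conventions are what the authors mean:

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """K(x, y) = exp(-gamma * ||x - y||^2), the radial basis function kernel."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy 2-feature samples; "automatic" gamma taken as 1 / n_features here.
X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0]])
gamma = 1.0 / X.shape[1]                 # 'auto': 1 / n_features = 0.5
K = rbf_kernel(X, X, gamma)
print(np.round(K, 4))
# identical points -> 1.0 on the diagonal; similarity decays with distance
```

Because gamma controls how quickly similarity decays, "set as automatic" is not self-explanatory and the chosen convention should be stated.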
L 208: “data distributions for the training areas are shown in Fig. A. 3.”, p. 8: Here, I would be interested in some more explanation. What does that figure tell me? Is this already a result, or are you simply underlining that you sampled your training points equally across classes and bands? For 2019, it looks like your training areas cover very similar pixel value ranges (ca. 130-160) for the RGB bands, whereas in 2021 this looks quite heterogeneous. Do you discuss this somewhere?
L 210: “Refinement of the initial predictions.”, p. 8: This is interesting. Can you recall for how many pixels this was the case?
L 212: “the evolution of the”, p. 8: I suggest "representing degradation between...".
L 220: “increasing window sizes”, p. 8: starting from?
L 228-229: “To identify representative classes for the evolution of the degradation between 2019 and 2021, we therefore scanned the different data for the two years of measurement and observed the changes in morphology.”, p. 9: Could you please clarify this sentence? Do you mean "representative examples"?
L 24, equation 1, p. 9: The overall accuracy is calculated by dividing the number of TPs and TNs by the total number of samples. If you only used TPs, you underestimated your model performance.
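For a multi-class map, overall accuracy is conventionally the sum of the confusion-matrix diagonal (the per-class true positives) divided by the total sample count, which in a one-vs-rest binary view already accounts for the "TNs"; per-class precision, recall, and F1 come from the same matrix. A sketch with a hypothetical 3-class matrix (not the study's numbers):

```python
import numpy as np

# Hypothetical confusion matrix (rows = reference, cols = predicted)
cm = np.array([[50,  5,  5],
               [ 4, 40,  6],
               [ 6,  4, 30]])

# Overall accuracy: correctly classified samples (the diagonal) over all samples
oa = np.trace(cm) / cm.sum()

# Per-class precision, recall and F1 from the same matrix
tp = np.diag(cm)
precision = tp / cm.sum(axis=0)
recall = tp / cm.sum(axis=1)
f1 = 2 * precision * recall / (precision + recall)
print(round(oa, 3))   # 0.8
```

If the manuscript's equation 1 uses something other than this diagonal-over-total form, that would explain the discrepancy the reviewer flags.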
L 245: “For the model based on the 2014 – 2022 UAS time series (see Fig. 2a for the spatial extent)”, p. 10: I suggest moving this paragraph before 2.5 Performance evaluation of classification models, as this still belongs to Data Processing and Classification. Further, you now describe your processing and classification steps in full sentences instead of bullet points. I would suggest choosing one option and being consistent. Is the model SVM-based again?
L 246: “5 cm × 5 cm”, p. 10: Why did you not resample them to the same 10x10 resolution?
L 250: “7 × 7 moving windows,”, p. 10: To normalize the data?
L 250: “GLCM additional texture features”, p. 10: see comment before: why are you calculating these?
L 254: “confusion matrices to correct for misclassification bias in the output maps”, p. 10: Please explain how you do this. If your CM, for example, shows that 20% of your "stable palsa" pixels get mislabeled as "other", what do you do in order to correct this for the entire study site?
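One standard way to perform such a correction is error-adjusted area estimation in the style of Olofsson et al., where the confusion matrix is converted to area proportions and the mapped class areas are re-weighted accordingly; whether the authors did something like this is exactly what should be clarified. A sketch with hypothetical numbers (class names and areas invented for illustration):

```python
import numpy as np

# Hypothetical confusion matrix (rows = map class, cols = reference class)
cm = np.array([[80, 20],     # mapped "stable palsa": 20% actually "other"
               [10, 90]])    # mapped "other": 10% actually "stable palsa"
mapped_area = np.array([10.0, 4.0])    # ha per mapped class

# Express the matrix as estimated area proportions: each row is scaled so it
# sums to that class's share of the total mapped area.
w = mapped_area / mapped_area.sum()                    # mapped class weights
p = cm / cm.sum(axis=1, keepdims=True) * w[:, None]

# Error-adjusted area per reference class: column sums of the proportions
adjusted_area = p.sum(axis=0) * mapped_area.sum()
print(adjusted_area)   # mapped 10 ha of palsa shrinks, "other" grows
```

The total area is conserved; only the split between classes changes, which is why the correction procedure must be documented for the loss rates to be reproducible.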
L 260: “To investigate these electrical properties”, p. 11: This reads like you are referring to the factors you just described as electrical properties. Maybe rephrase it to "To measure the electrical conductivity..."?
L 261: “non-invasive nature”, p. 11: Can you describe the EMI in 1-2 sentences?
L 262: “large areas”, p. 11: Can you quantify how large? Comparable sizes of 14 ha or even larger ones?
L 267: “the instrument response is linearly related to soil EC”, p. 11: high values translating to high EC? What does this mean for the factors you described above?
L 267: “For our study site, we verified that the LIN hypothesis”, p. 11: Maybe you could move this sentence to the result section.
L 270: “GNSS”, p. 11: Is this built into the EMI, or is there an additional device mounted on the sledge taking these measurements?
L 272: “using a combined nugget and exponential model with a maximum prediction distance of 8 m.”, p. 11: Here, an explanatory sentence would be nice - or a reference.
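For reference, a "combined nugget and exponential model" usually denotes the semivariogram gamma(h) = nugget + psill * (1 - exp(-h/a)), with gamma(0) = 0 by convention. A short sketch with illustrative parameters (not the study's fitted values):

```python
import numpy as np

def exp_variogram(h, nugget, psill, a):
    """Nugget + exponential semivariogram:
    gamma(h) = nugget + psill * (1 - exp(-h / a)); gamma(0) = 0 by convention."""
    h = np.asarray(h, dtype=float)
    g = nugget + psill * (1.0 - np.exp(-h / a))
    return np.where(h > 0, g, 0.0)

# The model jumps to the nugget at h -> 0+ and rises towards nugget + psill;
# with an 8 m maximum prediction distance, only pairs within 8 m inform
# each kriged estimate. Parameters below are purely illustrative.
h = np.array([0.0, 1.0, 4.0, 8.0])
g = exp_variogram(h, nugget=0.1, psill=1.0, a=3.0)
print(np.round(g, 3))
```

Stating the fitted nugget, partial sill, and range (or citing the fitting routine) would let readers judge how smooth the interpolated EC map is.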
L 279: “horizons”, p. 11: soil horizons
L 281: “consists in”, p. 11: "is described/ defined by"
L 282: “The results consist of a TOC or MAOC stock that is made vulnerable annually as a result of palsa degradation. The actual timing of OC loss remains unknown.”, p. 11: Is this already a result or are you pointing to the consequences? "made vulnerable" - "exposed to"?
L 292-293: “Several tests have been conducted to determine which parameters provide the best classification results for palsa degradation between 2019 and 2021.”, p. 12: Which tests were those, are you explaining them somewhere? In the methods you currently only address the standard metrics (OA, Prec., Recall, F1) - were there more?
L 297: “undergoing or under recent degradation”, p. 12: Most of the time you write "undergoing degradation" instead of "undergoing or under recent degradation" in your manuscript. I would keep it this simple, to not get too wordy.
L 300: “Fig. C. 1a”, p. 12: How is this mean permutation importance calculated? Is this one of the tests you address above? Please explain this in the method section.
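For context, mean permutation importance is typically computed by shuffling one feature at a time, re-scoring the fitted model, and averaging the resulting accuracy drop over several repeats; this is presumably what the authors did, but it should be stated in the methods. A self-contained sketch with a toy stand-in "model" (the data and scorer are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: class is decided by feature 0 alone; feature 1 is pure noise.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)

def accuracy(X, y):
    """Stand-in 'model': predict class 1 when feature 0 is positive."""
    return np.mean((X[:, 0] > 0).astype(int) == y)

def permutation_importance(X, y, score_fn, n_repeats=20):
    base = score_fn(X, y)
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
            drops.append(base - score_fn(Xp, y))
        imp[j] = np.mean(drops)                     # mean accuracy drop
    return imp

imp = permutation_importance(X, y, accuracy)
print(imp)   # large drop for feature 0, ~0 for the noise feature
```

The number of repeats and the score used (OA, F1, ...) both affect the importance values, which is why the procedure deserves a sentence in section 2.4.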
L 300: “relative elevation and slope to perform a land cover classification over the Stordalen catchment was also done by Siewert (2018) and resulted in an overall accuracy of 74%.”, p. 12: So, including the diff in relative elevation improved the classification result compared to Siewert et al.?
L 306: “79%.”, p. 12: Which of the three window sizes achieved this value? Is it possible to include the confusion matrices for each run, i.e. (i) RGB, (ii) RGB + relative elevation, (iii) …, in the appendix? It is still not clear to me how many model runs there were in total and if the mean spatial filters were introduced one by one or at the same time. I would suggest including a workflow or detailed description under 2.4.
L 307: “Adding standard deviation spatial filters improves the overall accuracy to 80%, as does the standard deviation of the GLCM,”, p. 12: See comment above. It seems this was done one by one.
L 313-314: “Then, initial predictions were refined using elevation differences and model scores to ensure alignment with domain knowledge (see section 2.4).”, p. 12: In the methods you refer to the "highest decision score from the set of allowed classes". Could you shortly elaborate on that? There is no information on how many pixels in the study area were refined like this. Do you have a table of these decision scores? It is hard to understand this without any numbers.
L 316: “evolution”, p. 13: improvement?
L 326: “Tab. C. 1; Tab. C. 2”, p. 13: This is a little confusing: In C1 and C2 you present the CMs for the model run with all (?) 55 bands as: (i) 11 bands with original data (3 spectral bands, relative elevation and slope for the years 2019 and 2021 as well as the difference in relative elevation between 2019 and 2021) along with (ii) 3 × 11 bands with spatial filters (mean & standard deviation) over windows of increasing size, i.e. 3 × 3, 5 × 5 and 7 × 7, and finally (iii) the 11 bands from the texture attribute ‘homogeneity’. But in C 1b, where you show the performance, the 55 bands also include the refinement and noise filter. At the same time, all of the latter model runs have an input of 55 bands. So, some bands were substituted by others? I suggest, when you add these clarifications to the method section, naming the model runs more clearly and establishing which bands were used for which run.
L 330-333, p. 13: See comment for Tables C1 and C2.
L 348: “2014 WorldView 2 satellite”, p. 14: Where was the 2000 dataset from?
L 351: “0.9%·a-1 to 1.1%·a-1”, p. 14: Is this range due to the different model outputs? Could you briefly state, which values result from which model run?
L 352: “previous periods”, p. 14: "the 1970-2000 period"
L 371: “Recent modeling studies besides”, p. 15: "Besides, recent modeling studies ..."
L 378: “a large number of study sites”, p. 15: "a large number of study sites of comparable sizes"
L 379: “humidity”, p. 15: This is misleading. You don't measure humidity, but quantify the proportion of open water and inundated areas to total land cover and - in the case of the EC data - higher soil moisture.
L 380: “model from the broader temporal view”, p. 15: This reads like the model used for the 2014-2022 data was a different one.
L 380: “we observe a trend towards an increase in the area”, p. 15: "we observe an increase"
L 386-387: “Furthermore, the model from the 2014 – 2022 UAS time series does not use terrain morphology data (relative elevation and slope) for classification.”, p. 15: Okay, here it is clearer that the model seems to be the same (SVM), but the input varied. Please clarify this in the method section.
L 388: “quality indicators (Fig. 5a) are weaker than for the biennial model (Tab. 2)”, p. 15: Please be consistent and precise with your wording, you are comparing the F scores here, not all quality indicators.
L 401-402: “enabling the estimation of soil EC from the quadphase component of the measured field, as the LIN assumption is met (McNeill, 1980).”, p. 17: Could you explain this in 1-2 sentences?
L 407-408: “vegetation types, as shown in the RGB orthophoto”, p. 17: Can you repeat them here?
L 408: “more developed vegetation”, p. 17: What do you mean by "more developed"? Are the plants in high EC areas considerably higher or lusher?
L 410: “R2 = 51%”, p. 17: Could you explain this? In my experience 51 % is a moderate correlation, not a clear linear one.
L 412: “p-values from Kruskal-Wallis test < 10−3”, p. 17: could you list these p-values here or add them to the figure?
L 420: “Figure 6:”, p. 18: What is the white space on the left of b, c, d? The data points for class "other" in subfigure e) are hard to see. Could you use a different color for this class?
L 423-424: “Boxplots of the evolution of electrical conductivity as a function of the class.”, p. 18: “Boxplots of the EC per land cover class”
L 427: “land-cover”, p. 19: land cover
L 432: “23 cm”, p. 19: Where does this number come from? Is this derived from your DEM?
L 434-435: “They consist of TOC or MAOC stocks that are made vulnerable annually as a result of palsa degradation and represent first order estimates.”, p. 19: Could you please check the grammar of this sentence again? What is "they"?
L 445: “~ 7.1 g-C·m-2·a-1 in 2019 to ~ 7.3 g-C·m-2·a-1”, p. 19: Could you briefly say where these numbers are coming from?
L 450: “show much lower or higher results than”, p. 19: "deviate from"
L 456: “evolution”, p. 19: change?
L 460: “arctic”, p. 20: "Arctic"
L 473-474: “We have conducted an evaluation of palsa degradation through time based on photogrammetric surveys providing access to RGB imagery and topography”, p. 20: e.g. "We quantified palsa degradation using RGB imagery and topography data from UAS surveys, ...”
L 487-488: “pool of 12 metric tons of organic carbon”, p. 20: under 3.4 you state "24 metric tons of TOC, including 6 metric tons of MAOC". How did you get to this number?
L 488: “~25% is mineral-interacting organic carbon”, p. 20: could you please explain briefly, how you arrived at this number?
L 489: “~ 7.1 g-C·m-2·a-1 in 2019 to ~ 7.3 g-C·m-2·a-1 in 2021”, p. 21: see comment L 445.
Figure A. 1: “evolution of active layer depth along the gradient; (c) evolution of the volumetric water content along the gradient”, p. 22: See comment under 2.1: Did you do these measurements? Could you briefly describe?
“Table A. 1:”, p. 23: Do Robota Triton XL and Sensefly Ebee also use RTK?
“Figure A. 3”, p. 26: Could you please also label the x-axes of subfigures a-f and normalize them to a uniform range? This would make interpretation a little easier for the reader.
Citation: https://doi.org/10.5194/egusphere-2025-3788-RC2
Data sets
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2014 M. Palace et al. https://doi.org/10.7910/DVN/SJKV4T
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2015 M. Palace et al. https://doi.org/10.7910/DVN/NUXE30
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2016 M. Palace et al. https://doi.org/10.7910/DVN/IAXSRD
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2017 J. DelGreco et al. https://doi.org/10.7910/DVN/NZWLHE
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2018 M. Palace et al. https://doi.org/10.7910/DVN/2JXWVW
UAV - RGB orthomosaic from Stordalen, 2019-08-16 Abisko Scientific Research Station, Swedish Infrastructure for Ecosystem Science (SITES) https://hdl.handle.net/11676.1/U4o8KrPkEiKw5RsfiCJZeEgX
RGB orthomosaic, digital surface model and slope over Stordalen Mire, Northern Sweden, 2021 M. Thomas et al. https://doi.org/10.14428/DVN/MGNYNN
Unmanned Aerial Imagery over Stordalen Mire, Northern Sweden, 2022 M. Palace et al. https://doi.org/10.7910/DVN/G9Y8WC
Model code and software
Accelerated lowland thermokarst development revealed by UAS photogrammetric surveys in the Stordalen mire, Abisko, Sweden M. Thomas et al. https://doi.org/10.14428/DVN/SX6TYV
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 2,393 | 84 | 27 | 2,504 | 73 | 75 |
- RC1: 'Comment on egusphere-2025-3788', Anonymous Referee #1, 18 Aug 2025
The authors used UAS surveys to quantify rates of palsa degradation at the Stordalen mire in Sweden from 2019 to 2021, within the context of longer-term change from almost annual UAS surveys from 2014 to 2022. EMI measurements were also collected to characterize the soil properties in stable, degrading, and degraded sections of a study transect.
While the study is interesting, I worry that the findings, as they are currently presented, are not particularly novel. The authors find that topography is important for identifying palsa degradation, that palsa degradation leads to more open water and increased soil moisture, and that palsa degradation rates have increased in recent years. These findings are useful, but they have already been demonstrated in other studies. What is potentially novel, however, is the updated estimate of emissions from Stordalen, and the integration of EMI and UAS field methods. I encourage the authors to really highlight these elements in this manuscript.
GENERAL COMMENTS
This manuscript could be greatly improved by separating the Results and Discussion sections. As it currently reads, this section is quite long, and it contains a mix of reporting of results, reminders of methodology, and discussion and links to literature. I would strongly recommend separating out the Results and Discussion sections and focusing the Discussion primarily on the rate of palsa degradation compared to the literature, the implications for organic carbon stability and greenhouse gas emissions, and the scaling up of the results. A stronger Discussion would better justify the suitability of this manuscript in a multi-disciplinary journal like The Cryosphere.
The EMI results are interesting, but they currently feel a bit out of place with the rest of the study that is more focused on UAS-based rates of change. As above, I would suggest separating out the Results and Discussion and focusing in on the hydrological changes that were identified between stable, degrading, and degraded areas from the EMI surveys and what this means for the permafrost carbon feedback and greenhouse gas emissions. Currently, the manuscript has a separate section for the EMI results that discusses an increase in open water and soil moisture, but it would be more effective if the EMI results were properly integrated with other elements of the study, such as elaborating on what increases in ponds and soil moisture would mean for carbon stocks, emissions, etc.
I find it a bit difficult to follow the flow of the study, as it is currently written. I think it would be worth considering re-structuring the Methods section to first describe the data processing for the 2019-2021 model, then the EMI work, then the data processing for the 2014-2022 dataset. This would help to highlight the novel data collection/work (2021 UAS flight, EMI survey, etc.) within the context of a longer study period (2014-2022).
Overall, the figures are nice and the authors have taken care to ensure that the colour schemes are accessible. The text formatting in some of the tables may need to be reviewed, as there are some terms that are capitalized and others that are not.
SPECIFIC COMMENTS
INTRODUCTION
P2 L53, Remove “excess” from this sentence.
P2 L57, There is a new paper that has just come out on the use of the term “abrupt thaw” by Webb et al. 2025 that can be used to replace Turetsky et al. 2020.
P2-3 L53-78, This background information on thermokarst landform types and development is interesting, but it takes away from the purpose of the study itself, which is to quantify palsa degradation in the Stordalen mire using UAS surveys. The Introduction could be improved by introducing palsas as peatland permafrost landforms, discussing the importance of peatland permafrost landscapes for permafrost carbon feedbacks, and then diving into the benefits of UAS imagery over satellite imagery and aerial photographs.
P3 L90-92, I agree that reported rates of degradation are extremely variable, but it would be helpful to highlight to the reader what area or approximate time period these studies are from. In P3 L95-97, the authors state that there is accelerated degradation in more recent years, but it is difficult to understand this relative to the previous statement that does not provide a time reference/study period.
P3 L93-94, Wang et al. 2024, Verdonen et al. 2023, Zuidhoff and Kolstrup 2000, Thie 1974, Payette et al. 2004 are some other studies that also present lateral palsa degradation rates. These may be helpful for further contextualizing the results of this study on P15, L359-363.
P4 L98, What is meant here by “revisit”? This is the first mention of the Stordalen mire, and the authors do not provide examples of previous studies of degradation at the Stordalen mire, other than to say that 55% of Sweden’s largest palsa peatlands are currently subsiding in the previous paragraph. Please clarify.
P4 L103-105, Please provide what years the EMI surveys were conducted.
METHODS
P4 L109, Section 2.1 is lacking information on the climatic conditions over the study period, from 2019-2021 for the primary part of the study, and from 2014 to 2022 for the additional UAS data that was used. This would be critical for contextualizing palsa degradation.
P4 L113, Given that the study is primarily conducted from 2019 to 2021, or even from 2014 to 2022, is there a more recent value for MAAT since 2006? Please update.
P4 L120, Zuidhoff and Kolstrup 2005 and Railton and Sparling 1973 also discuss vegetation associated with different palsa stages.
P4 L123, Is there any available information, either from this study or from previous studies, on the height of the palsas and the thickness of the permafrost at this site? It is helpful to know that the active layer thickness varies from 50 cm in stable areas to >200 cm in degraded areas, but is it possible that a talik has formed and that there is still permafrost present at depth?
P5 L134, The authors state here that the field campaign took place between September 14 and October 10, 2021, but that the UAS flight took place on September 17, 2021. What else occurred during this time period? When were the EMI surveys conducted?
P5 L137, Thanks for providing the forward overlap. What was the side overlap?
P6 L134, Please specify that this is RGB imagery collected from UAS. While this is clear when looking at Table A 1, this should be included in the main text as well.
P6 L161, I think it would be best to present this information in paragraph form and to explain each step and the datasets it used. For example, stating that slopes were extracted from DSMs "where applicable" is quite vague, and the reader is left unsure of what is and is not applicable. Is this trying to convey that slopes were extracted from the DSMs for 2019 and 2021, but not for the other years? And how was the area of interest extracted? Was it clipped?
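As an illustration of the level of detail that would help (a minimal sketch only, with a made-up DSM tile and cell size; this is not the authors' actual pipeline), the "slope from DSM" step could be stated as explicitly as a few lines of array arithmetic:

```python
import numpy as np

def slope_degrees(dsm, cell_size):
    """Slope in degrees from a DSM array, via finite differences."""
    dzdy, dzdx = np.gradient(dsm, cell_size)          # elevation change per metre
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# Hypothetical 10 cm resolution tile: a plane rising 0.1 m per metre eastward,
# so every cell should come out near atan(0.1) = 5.7 degrees
dsm = 0.1 * np.tile(np.arange(100) * 0.1, (100, 1))
slope = slope_degrees(dsm, cell_size=0.1)
```

Spelling the step out at roughly this granularity (inputs, resolution, which years it applies to) would remove the ambiguity of "where applicable".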
P7 Figure 2, Remove the extra “t” in “literature” in the caption for panel a. The grey and yellow bounding boxes are very similar in colour and are a bit difficult to differentiate.
P7 L183-184, Are there any historical aerial photographs or satellite images that can help to confirm that permafrost was not present in these locations for several decades?
P7 L189, Are there any locations at all where permafrost aggradation and palsa expansion occurred? Having a section that describes climatic conditions from 2014 to 2022 as suggested above would be helpful for this.
P7-8 L190-222, As with P6, I think that it would be best to present much of this information in paragraph form. This could be supported by a figure or table that explains the process more visually and that possibly integrates information from Table A2 and Figure A3.
P11 L257, Hypotheses are usually presented in the Introduction, not the Methods section. Please move this up to the Introduction and provide more information on how the authors expect the electrical properties of the soil to vary along the degradation gradient: should the EC be higher or lower according to the factors presented (soil texture, clay content, water content, salinity, organic matter type, organic matter proportion, soil structure, soil density, soil temperature, and most importantly, permafrost presence/absence!)? In this section, please instead focus on describing how the EMI surveys were positioned, how long they were, etc. It is helpful to know that there were 1083 points, but the reader is not told how far apart the points are, whether they all lie along the same line, etc.
P11 L271, Is there a reference or any more information available for this custom-made acquisition program?
RESULTS AND DISCUSSION
P13 Figure 4, This figure is very effective, particularly panel b! I would recommend changing the light-blue colour of the "degraded areas" in panel a to another colour, as it looks like water at first glance.
P14 L344-355, This is the first instance where the reader can really come to understand the authors’ “revisit” of palsa degradation rates in the Stordalen mire. These past studies should be first presented in Section 2.1 so that the reader is able to keep this information in mind as they read through the results of this study.
P15 L379, The section entitled “Palsa degradation means higher levels of humidity” does not really discuss humidity levels at all. It may be more appropriate to instead name the section something like “Palsa degradation leads to increases in soil moisture and open water”.
P16 Figure 5, I understand that data could not be collected in 2020, so the corresponding bar has a dashed outline. But if data could not be collected in 2020, how is there a bar and a value associated with this year at all?
P15 L385-386, The authors state here and show in Figure 2 that the processing extents for the 2014-2022 comparison and the 2019-2021 comparison are not the same. In addition to the work that has been done, is it possible to clip the results of the 2019-2021 comparison to the 2014-2022 comparison extent, so that the authors can additionally present results that are directly comparable?
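To make the suggestion concrete (a sketch with invented extents, not the study's actual coordinates), the comparable window is simply the intersection of the two processing extents, over which the 2019-2021 rates could be recomputed:

```python
def intersect_extents(a, b):
    """Intersection of two (xmin, ymin, xmax, ymax) extents; None if disjoint."""
    xmin, ymin = max(a[0], b[0]), max(a[1], b[1])
    xmax, ymax = min(a[2], b[2]), min(a[3], b[3])
    return (xmin, ymin, xmax, ymax) if xmin < xmax and ymin < ymax else None

# Hypothetical extents in a projected CRS (metres):
# the 2014-2022 extent and a larger 2019-2021 extent
common = intersect_extents((0, 0, 800, 600), (-100, -50, 900, 650))
```

Clipping both comparisons to `common` before computing loss rates would yield numbers the reader can set side by side directly.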
CONCLUSION
P20 L481-485, This is a helpful summary of findings that integrates the EMI and 2014-2022 work well.