CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea
Abstract. A new CESM/MOM6 ocean-only regional 1/12° configuration of the Caribbean Sea is presented and validated. The configuration was developed in response to the rising need for high-resolution models for climate impact applications. It is validated for the period 2000–2020 against ocean reanalysis and a suite of observation-based datasets. Particular emphasis is placed on the configuration's ability to represent the dynamical regime and properties of the region across sub-seasonal, seasonal and inter-annual timescales. Near-surface fields of temperature, salinity and sea-surface height are well represented. In particular, the seasonal cycle of sea-surface salinity and the spatial pattern of low salinity associated with the Amazon and Orinoco river plumes are well captured. Surface speeds compare favorably against reanalysis and show that the mean flows within the Caribbean Sea are well represented. A case study of the origin of waters arriving at the Virgin Islands Basin shows that the model reproduces known pathways and timing for river plume waters intruding into the region. The seasonal cycle of the mixed layer depth is also well represented, with biases of less than 3 m relative to ocean reanalysis. The vertical structure and stratification across the water column compare favorably with ship-based observations, with the largest simulated biases in the near-surface water mass and in the sub-surface salinity maximum associated with the sub-tropical underwater mass. The temperature and salinity variability of the vertical structure is well represented in the model solution. Mean ocean mass transports across the multiple passages in the eastern Caribbean Sea compare favorably with observation-based estimates, but the model exhibits smaller variability than ocean reanalysis transport estimates. Furthermore, a brief comparison against a 1° CESM global ocean configuration shows that the higher-resolution regional model reduces significant biases, in sea-surface salinity and mixed layer depth in particular, and better represents important features and variability within the Caribbean Sea. Overall, the regional model reproduces the processes within the Caribbean Sea to a good degree and opens the possibility of regional ocean climate studies in support of decision making within CESM.
Status: closed
- RC1: 'Comment on egusphere-2024-1378', Anonymous Referee #1, 02 Jul 2024
Seijo-Ellis et al. presented a 1/12-degree resolution regional ocean model, referred to as CARIB12, that encompasses the entire Caribbean Sea and Gulf of Mexico, built in the recently developed CESM2-MOM6 modeling system. The authors provided an extended validation of CARIB12 against GLORYS reanalysis fields, as well as multiple observational datasets, showing that the model was able to simulate reasonably well the long-term and seasonal averaged patterns in temperature, salinity, mixed layer depth, sea surface height, and other variables across the Caribbean Sea. Not surprisingly, they also showed that CARIB12 compares much better to observations than a 1-degree resolution CESM-POP model. This is a valuable modeling effort for the understudied Caribbean Sea, a region that contains critical Atlantic circulation pathways and hosts a rich marine biodiversity.
I have listed below four points that should be addressed before I can recommend publication:
1. Yucatan transport: You estimated a Yucatan transport of 20.6 Sv. It is important to include that value in Table 4 and compare it with more recent estimates for the region. Although you cited a couple of studies that seem to be in good agreement with your model result (Sheinbaum et al. [2002] and Candela et al. [2003], with 23.08 and 23.06 Sv, respectively), a more recent study by Candela et al. (2019; DOI: 10.1175/JPO-D-18-0189.1) using an extended time series reported a Yucatan transport of about 27.6 Sv. That observed value is somewhat below the transport that can be derived from GLORYS, which is about 29 Sv. This implies that CARIB12 underestimates the Yucatan transport by 25% or more. You should recognize that model bias, and perhaps provide some discussion of the possible reasons for this underestimation.
2. Monthly climatologies for specific subregions: It could be interesting to compare monthly climatological patterns of salinity and temperature in specific subregions (either surface fields or vertical profiles). You calculated a monthly climatology in Figure 13, but that is an average over the entire region of interest, which I think is not the right approach. Averaging over this large area could mask subregional biases, and it also does not discriminate important spatial variability that may be worth describing for the region.
3. Interannual variability: Figures 14 and 15g-l reveal that the interannual transport variability is not well reproduced by the model (I disagree with the statement in lines 365-366, “While the mean inter-annual flows are well represented in CARIB12”). Does the Caribbean Current have a chaotic behavior? This should be further discussed. In addition, since the ability of the model to reproduce interannual variability is critical for the analysis of historical patterns, I wonder to what degree the model is able to simulate realistic interannual variability in temperature and salinity. Could you generate monthly time series of these variables for specific subregions and compare them with observations or GLORYS?
4. Figure quality: The quality of the figures must be improved. You are not using any map projection to display the spatial patterns, and I think you should. I would also consider adding some discretization to the colorbars to better discriminate spatial features. You could also consider merging several figures, such as 3 and 4, 6 and 7, and 9 and 10. That may help to better compare the winter-to-summer changes in SSS, EKE, and MLD.
Additional suggestions:
Velocity patterns: In addition to the mean speed, I suggest you compare the mean velocity fields (u, v). That would add further insight into what circulation biases the model has. For example, see Figure 4 in Liu et al. (2015; http://dx.doi.org/10.1016/j.jmarsys.2015.01.007).
Section 3.15: I am not sure this section is worth including in the manuscript. This is not model validation, and Figure 16 is a little hard to interpret. Unless you have actual observations to compare the simulated trajectories against, I would remove this section.
MOM6-NWA12: I wonder what differences in configuration (beyond the model domain extent) CARIB12 has with respect to the MOM6-NWA12 configuration of Ross et al. (2024). It could be worth mentioning something about this.
Citation: https://doi.org/10.5194/egusphere-2024-1378-RC1
- AC2: 'Reply on RC1', Giovanni Seijo-Ellis, 21 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1378/egusphere-2024-1378-AC2-supplement.pdf
- RC2: 'Comment on egusphere-2024-1378', Anonymous Referee #2, 02 Jul 2024
Review of “CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea” by Seijo-Ellis et al.
This manuscript describes and validates an ocean-only CESM/MOM6 configuration of the Caribbean Sea. I think this work has the seeds of a paper publishable in GMD, but it needs some improvements before it can be accepted for publication. The validation is very basic and could be substantially improved. Since this is a regional model, one would expect it to perform better than global models given the added physics and the parameter tuning to better fit the regional dynamics. Increased resolution is not a possibility here since the chosen resolution is the same as that of the model used to force the boundaries, a choice that is not justified in the text. It seems, therefore, that one of the benefits in terms of physics is that the model has tides, which are not resolved in the global products, and some extra parameterizations, such as that associated with (unresolved) submesoscale variability (just that associated with Mixed Layer Instability). Yet none of these aspects are discussed in the manuscript. There is no validation of the model tides, and the effect of such variability on the seasonal and subseasonal variability is not provided. The same holds for the MLI parameterization.
In addition, the validation of the model focuses only on the seasonal variability, by comparing means and monthly means. This is a very low benchmark given how good models and forcing fields are at the present time. There is a need for validation at subseasonal scales. Also, some information seems unnecessary, such as the comparison with CESM-POP and the drifter section, while other information is lacking, such as a more robust validation of the sub-seasonal component. A lot of weight is given to GLORYS12 as "the truth". The model has tides, which I assume makes the simulation much more realistic, so differences are expected and comparing with GLORYS12 might be misleading.
Finally, given that a lot of the emphasis is on the Amazon and Orinoco river plumes, more discussion about this is needed. How realistic is GloFAS? How does its seasonal cycle compare with the seasonal cycle derived from observations (such as the Dai dataset)? What is the effect of mixing by tides and of the mixing parameterizations on the river plumes?
More specific details are provided next.
Section 2.1 Model Description
The model resolution is the same as GLORYS12, why not higher resolution?
In “layer z∗”: what does "layer z*" mean? Is this a hybrid coordinate?
In “baroclinic time step of 800 s”, Table 1 says it is 900 s
In “The final configuration was largely determined by the mean flows into the CS across the multiple passages between the Caribbean Islands.”: It seems that the authors "optimized" the model configuration to better represent the inflow at the passages. Given the spatial domain of the model, why is this small area chosen as the benchmark? For instance, in the discussion about surface salinity several deficiencies are acknowledged regarding the representation of the river plumes, and in the discussion about surface currents deficiencies in reproducing the mesoscale in the eastern CS are also acknowledged.
“The parameterization of Fox-Kemper et al. (2011) is implemented for the re-stratification of the mixed layer by sub-mesoscale eddies with a front length scale of 1500m.”
In “1500m”: a space is needed here and in many other places in the text and tables.
In “Table 2.” should be Table 1
Section 2.1.1 Initial and Open Boundary Conditions
In “The open boundary conditions are specified daily”, Table 1 says monthly means are used in the nudging layer, please clarify
In “Ten tidal constituents are specified at the boundaries.”: Given the huge spatial extent of this “regional” application, the direct astronomical forcing might be an important contributor to the tides. Why is the tidal potential forcing not included? There is a need for a robust validation of the tides in the model. How well are the harmonics at coastal stations and along the altimeter tracks reproduced?
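For illustration, one way such a check could be set up (a minimal sketch, not the authors' workflow: the station series, hourly sampling, and constituent list below are placeholders) is a least-squares harmonic fit of a few major constituents to modelled and observed sea-surface height at a coastal station, followed by a comparison of amplitudes and phases:

```python
import numpy as np

# Angular frequencies (rad/hour) of four major constituents, from their standard periods.
PERIODS_H = {"M2": 12.4206012, "S2": 12.0, "K1": 23.9344697, "O1": 25.8193417}
OMEGAS = {name: 2.0 * np.pi / T for name, T in PERIODS_H.items()}

def harmonic_fit(t_hours, ssh):
    """Least-squares fit of ssh(t) = mean + sum_i [a_i cos(w_i t) + b_i sin(w_i t)].
    Returns {constituent: (amplitude in ssh units, phase in degrees)}."""
    cols = [np.ones_like(t_hours)]
    for w in OMEGAS.values():
        cols += [np.cos(w * t_hours), np.sin(w * t_hours)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), ssh, rcond=None)
    out = {}
    for i, name in enumerate(OMEGAS):
        a, b = coef[1 + 2 * i], coef[2 + 2 * i]
        out[name] = (np.hypot(a, b), np.degrees(np.arctan2(b, a)) % 360.0)
    return out

# Illustrative use with synthetic hourly series standing in for a tide gauge and the model.
t = np.arange(0.0, 365.0 * 24.0, 1.0)
rng = np.random.default_rng(0)
ssh_obs = 0.12 * np.cos(OMEGAS["M2"] * t - 0.8) + 0.03 * rng.standard_normal(t.size)
ssh_mod = 0.10 * np.cos(OMEGAS["M2"] * t - 0.6) + 0.03 * rng.standard_normal(t.size)
fit_obs, fit_mod = harmonic_fit(t, ssh_obs), harmonic_fit(t, ssh_mod)
for name in OMEGAS:
    (ao, po), (am, pm) = fit_obs[name], fit_mod[name]
    print(f"{name}: obs {ao:.3f} m / {po:6.1f} deg | model {am:.3f} m / {pm:6.1f} deg")
```

The same kind of fit applied along altimeter tracks, or a comparison against a global tidal atlas, would give spatial maps of amplitude and phase error.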
In “Nudging layers for temperature, salinity and velocities are applied to minimize noise at the boundaries that may contaminate the interior” … “The layers are based on mean monthly fields from GLORYS12.” I am a bit confused here. The GLORYS12 fields are specified as daily means at the boundaries, but then nudged to monthly means within the nudging area delineated by the white dashed line in Fig 1, using a nudging strength of 0.3 days? I suspect that this very strong nudging to monthly means will kill most of the daily variability coming in from the boundary forcing. Please clarify.
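To make the concern concrete: if the 0.3 days is an e-folding relaxation timescale τ, the nudged field φ in the boundary layer obeys

```latex
\frac{\partial \phi}{\partial t} = -\frac{\phi - \phi_{\mathrm{target}}}{\tau},
\qquad
\phi(t) - \phi_{\mathrm{target}} = \bigl(\phi(0) - \phi_{\mathrm{target}}\bigr)\, e^{-t/\tau},
```

so with τ = 0.3 days a departure from the monthly target decays to e^{-1/0.3} ≈ 4% of its initial value within a single day, which is why most of the daily variability entering through the open boundary would be expected to be damped in the nudging zone (assuming the 0.3 days is indeed an e-folding relaxation timescale).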
In “JRA55-do”: why is this forcing used? Has this product been favorably validated for the area in comparison with ERA5 or NCEP?
Section 2.2 Validation datasets
The paper focuses mostly on reproducing the seasonal variability. That is a very low benchmark.
In “Optimum Interpolation SST”: This is a ¼-degree resolution product. There are many other higher-resolution datasets. Any particular reason for using this one in the validation?
Section 3 Results
In “Our focus is on time averaged fields, with the average computed for the full time series or specific seasons.”: Comparison of monthly means is not a very challenging metric. Why not set a more ambitious goal, such as spatial maps of correlation in time (and RMS) using daily values, which are available for the satellite analyses?
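As a sketch of the kind of metric meant here (array shapes and variable names are illustrative only, not tied to the actual CARIB12 or satellite file formats):

```python
import numpy as np

def pointwise_corr_and_rms(model, obs):
    """model, obs: co-located daily fields of shape (time, lat, lon).
    Returns (corr, rmsd) maps of shape (lat, lon), ignoring NaNs (e.g., land)."""
    m_anom = model - np.nanmean(model, axis=0)
    o_anom = obs - np.nanmean(obs, axis=0)
    cov = np.nanmean(m_anom * o_anom, axis=0)
    corr = cov / (np.nanstd(model, axis=0) * np.nanstd(obs, axis=0))
    rmsd = np.sqrt(np.nanmean((model - obs) ** 2, axis=0))
    return corr, rmsd

# Illustrative use with random stand-ins for one year of daily SST from model and satellite.
rng = np.random.default_rng(1)
sst_model = rng.standard_normal((365, 50, 80))
sst_obs = sst_model + 0.5 * rng.standard_normal((365, 50, 80))
corr_map, rmsd_map = pointwise_corr_and_rms(sst_model, sst_obs)
print(corr_map.shape, float(np.nanmedian(corr_map)), float(np.nanmedian(rmsd_map)))
```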
Section 3.1.1 Temperature and salinity
The figures need some work. Latitudes and longitudes need to be included in panels b-e, here and in other figures. I know they can be read from the left panel, but that is an unnecessary burden for the reader.
In “small cold bias of -0.15°C” (Seijo-Ellis et al., 2024, p. 7): is this the spatial mean? Please specify.
“Notably, GLORYS12 appears to have a freshwater source in the region of the Dominican Republic and Puerto Rico resulting in positive biases within the CS”: Here and in many parts of the validation section, I wonder whether the differences could be due to the absence of tides in GLORYS12. There is a strong need to validate the tides and show their effect on the model performance. After all, this is one of the main benefits of a regional application compared with global models (besides increased resolution, which is not the case here).
In “the spread and extent of the plume waters is similar between CARIB12 and GLORYS12, but is much smaller in the gridded observations (particularly during the winter, Figure 3).”: Could you comment on the spatial resolution and limitations of the SSS analysis? Is this a blend of SMOS and SMAP, plus available in situ data? If so, there are well-known limitations close to the coast, especially in highly populated areas, due to radio frequency interference. Moreover, these products are usually provided with a map of the associated error. How large is the analysis error close to the coast? Maybe the differences you are discussing here fall within the expected error bounds.
In “In CARIB12, runoff is not distributed vertically in the ocean but rather spread horizontally across a maximum radius of 600 km with an e-fold decay scale of 200 km at the shallowest layer.”: This is certainly a very strong limitation. Why is the radius so large? Did the authors try smaller (i.e., more localized) values? Why not? How sensitive is the spread of the river plumes to the vertical mixing parameterization used? This needs to be specified in the text.
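For reference, the quoted sentence amounts to a horizontal weighting of roughly the following form; this is a schematic reading of the stated 600 km cutoff and 200 km e-folding scale, not the actual CESM/MOM6 runoff-spreading implementation:

```python
import numpy as np

def runoff_weight(r_km, r_max_km=600.0, e_fold_km=200.0):
    """Relative weight given to runoff deposited at distance r_km from the river mouth:
    exponential decay with a 200 km e-folding scale, cut off at 600 km."""
    r = np.asarray(r_km, dtype=float)
    return np.where(r <= r_max_km, np.exp(-r / e_fold_km), 0.0)

# At 200 km the weight has dropped to ~37% of its value at the mouth, and at the 600 km
# cutoff to ~5%, illustrating how far offshore a large spreading radius places freshwater.
print(runoff_weight([0.0, 200.0, 400.0, 600.0, 800.0]))
```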
Section 3.1.2 Surface currents speed, eddy kinetic energy and sea surface height
In “The biases are larger when comparing CARIB12 to the altimetry-based GOGSSH (mean bias of 3.8 cm/s), but it is worth noting the difference in spatial resolution and that the altimetry derived velocities are purely geostrophic which may not be a good approximation in this region (surface Ekman currents can reach 50 cm/s in parts of the Caribbean Basin, Andrade-Amaya, 2000)”: Is this really true? The authors do not specify exactly which product from Copernicus they are using, but the only global product I could find specifies: "The total velocity fields are obtained by combining CMEMS satellite Geostrophic surface currents and modelled Ekman currents at the surface and 15m depth (using ERA5 wind stress in REP and ERA5* in NRT)".
The product I am referring to is: https://data.marine.copernicus.eu/product/MULTIOBS_GLO_PHY_MYNRT_015_003/description
Please clarify and elaborate on this point in the manuscript.
In “Caribbean Sea through the Colombian Basin”: Could you provide a lat/lon range for this?
In “Figure 8 shows the winter and summer mean SSH for CARIB12 and GLORYS12”, was the mean SSH (a constant) within the area removed? please specify
In “The meridional extent of this feature extends further off-shore in CARIB12 indicating a wider Caribbean Current in CARIB12 compared to GLORYS12.”: please add the mean currents to both panels to show that this is actually the case.
A comparison for the SSH is made just for the Mean Dynamic Topography (MDT) and the seasonal values. The authors could consider using the along-track data re-processed for coastal applications:
https://www.aviso.altimetry.fr/en/data/products/sea-surface-height-products/regional/x-track-sla/gulf-of-mexico-caribbean-sea.html
or
https://www.aviso.altimetry.fr/en/data/products/sea-surface-height-products/global/altimetry-innovative-coastal-approach-product-alticap.html
What do the maps of bias, RMS and correlation along the tracks look like?
In “using the ∆0.03kg/m3 density criterion with respect to surface values in CARIB12 and GLORYS12, and with respect to a depth of 10 m in the deBoyer Montégut (2023) climatology.”: why not use the same definition (10 m reference) in both cases?
In “overall deeper MLD in CARIB12”: could this be due to the effect of tides on mixing?
In “small biases described for salinity”: I would not say the biases are small. Fig. 4 suggests biases of up to 2 psu!
In “an overall positive salinity bias (Figures 3 and 4) corresponds to saltier waters in the near surface which leads to weaker vertical stratification (Figure 12) in the upper 0-100 m resulting in a deeper mixed layer particularly during the winter” I think vertical mixing by tides is another very possible candidate.
In “one must exercise caution in this comparison as the CARIB12 MLD is calculated using the ∆0.03kg/m3 criterion referenced to the shallowest layer in the model (1.25 m), while the deBoyer Montégut (2023) dataset is calculated referenced to 10 m depth”: so, why not use 10 m instead?
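To illustrate why the reference depth matters, here is a minimal sketch of the ∆0.03 kg/m3 density-threshold MLD applied to a single synthetic profile with the two reference depths in question; the profile and interpolation details are illustrative only:

```python
import numpy as np

def mld_density_threshold(depth, sigma0, dsigma=0.03, ref_depth=10.0):
    """Depth at which potential density first exceeds its value at ref_depth by dsigma.
    depth (m, increasing downward) and sigma0 (kg/m^3) are 1-D profiles."""
    target = np.interp(ref_depth, depth, sigma0) + dsigma
    below = np.where(sigma0 >= target)[0]
    if below.size == 0:
        return depth[-1]                 # criterion never met: MLD set to profile bottom
    i = below[0]
    if i == 0:
        return depth[0]
    # Linear interpolation between the two levels bracketing the threshold.
    return np.interp(target, sigma0[i - 1:i + 1], depth[i - 1:i + 1])

# Synthetic profile with weak stratification in the upper 10 m and stronger below.
z = np.arange(1.25, 200.0, 2.5)
sig = 23.0 + 0.002 * np.minimum(z, 10.0) + 0.01 * np.maximum(z - 10.0, 0.0)
print(mld_density_threshold(z, sig, ref_depth=1.25),
      mld_density_threshold(z, sig, ref_depth=10.0))
```

The two calls return different mixed layer depths for the same profile, which is the ambiguity the comparison in the manuscript currently carries.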
Section 3.3 Vertical structure and water mass properties
This section needs considerable work. The authors mention that they "identified WOCE lines within the region of interest and specific cruises within the time frame of the simulation (2000-2020)", but here only the WOCE lines are discussed. In addition, the validation is reduced to a visual comparison of the PDFs as a function of depth (and the difference of the PDFs). Given the latitudinal extent of the WOCE lines, I suppose there is substantial spatial variability from north to south, yet none of that is considered in the validation. And why not include all the Argo profiles available for the area in the validation of the model? A much more robust validation would be to provide correlation, bias and RMS error as a function of depth for all the available profiles (WOCE lines, Argo floats, cruises). The same could be done along the WOCE transects to show where in latitude the deficiencies are.
In “observations along WOCE lines A20 and A22 within the region of interest”: there is a need for more information about this. How many occupations of the two lines were there between 2000-2020? Are these transects during the summer, winter, or both? If more than one season, does it make sense to combine them all in the same plot?
In “some of the largest differences occurring in the shallower Caribbean surface waters (Figure 11d-e)”: how is the reader supposed to know this mismatch is for the shallow areas? It would be better to show a comparison between the model and observations along one of the transects, or even better, if there are several occupations of a transect, to show a transect of the mean difference.
In “Additional biases are noted around the salinity maximum”: again, how is the reader supposed to see that in Figure 11? A spatial map would be much more informative.
In “Figure 13”: the caption narrative is very confusing, and not just for this figure. Please revise, and then ask a colleague not familiar with the work whether it makes sense. Indicate the state variable in the title, such as CARIB12 temperature, GLORYS12 salinity, etc.
In “These results highlight CARIB12’s ability to correctly represent temperature and salinity variability in the CS as seasonal scales”: why do these results show CARIB12 is better? This is a model-to-model comparison, and in fact GLORYS12 has in situ profiles of temperature and salinity assimilated, so in what sense is CARIB12 better?
In “We note that while transports based on CARIB12 and GLORYS12 are a time mean during 2000-2020, observational estimates are available only for shorter periods of time.” … “The mean transport across the Windward Passage between Cuba and Hispaniola is 1.91 Sv into the Caribbean Sea which is lower than that in GLORYS12 (2.8 Sv) and observational based estimates (3.8/3.6 Sv) by Smith et al. (2007)”: what is the minimal cross-section area along this passage in the raw bathymetry vs. that in CARIB12 and GLORYS12? Also, how was the transport in Smith et al. computed? Check in the model simulation the assumptions made by Smith et al., such as extrapolation to the bottom and boundaries: based on the model simulations, are these valid assumptions? What happens to the transport estimate if you use the Smith et al. assumptions? What is the temporal coverage of the observations used to estimate the transport? Does the model suggest substantial seasonal variability that was not captured by the observational period? Check these issues for all passages when comparing with observed estimates.
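For context, passage transports from gridded model output are typically computed as the area integral of the normal velocity over the section. A generic sketch follows; the grid metrics, section dimensions, and variable names are placeholders, not the CARIB12 diagnostics:

```python
import numpy as np

def section_transport_sv(v_normal, dx_m, dz_m):
    """Volume transport (Sv) through a vertical section.
    v_normal: velocity normal to the section, shape (nz, nx) in m/s (NaN over land/rock).
    dx_m: along-section cell widths, shape (nx,); dz_m: layer thicknesses, shape (nz,)."""
    face_area = np.outer(dz_m, dx_m)                  # cell face areas (m^2)
    return np.nansum(v_normal * face_area) / 1.0e6    # 1 Sv = 1e6 m^3/s

# Illustrative numbers: a schematic 150 km wide, 800 m deep passage with a uniform
# 0.2 m/s inflow carries about 24 Sv, the same order as the transports discussed above.
nx, nz = 60, 40
dx = np.full(nx, 150e3 / nx)
dz = np.full(nz, 800.0 / nz)
v = np.full((nz, nx), 0.2)
print(section_transport_sv(v, dx, dz))
```

Differences in how the section is masked, extrapolated to the bottom and sidewalls, or truncated in time (the points raised above about the Smith et al. assumptions) can easily change such an estimate by a few Sv.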
Typo in “(Figure ??)”
In “we designed an experiment similar to that in Seijo-Ellis et al. (2023).”: If the authors decide to keep this section, a very high percentage of potential readers will not know what was done in Seijo-Ellis et al. (2023). Please briefly describe that work before what is reported in the next paragraph.
In “Figure 16a shows that the seasonal decrease in near-surface salinity is largely driven by near-surface horizontal salinity advection”: why is just one year shown? How do we know the same applies to other years? Maybe show the mean and standard deviation, similar to Fig. 15a?
In “Variability in salinity advection is associated with intrusions of Amazon river waters: salinity starts decreasing between May and June, as Amazon river plume waters arrive into the VIB (as indicated by the light blue colors in Figure 16c).”: Now here, out of the blue, the authors are talking about drifter trajectories. This is VERY confusing without knowing what was done in Seijo-Ellis et al. (2023). And all this to conclude that the results are consistent with that work. Why replicate work that was done and reported previously? All this space could be used to provide a more convincing model validation, for tidal variability and sub-seasonal scales for instance.
Citation: https://doi.org/10.5194/egusphere-2024-1378-RC2
- AC1: 'Reply on RC2', Giovanni Seijo-Ellis, 21 Aug 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2024/egusphere-2024-1378/egusphere-2024-1378-AC1-supplement.pdf
Data sets
Model configuration and input files for: "CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea" Giovanni G. Seijo-Ellis, Donata Giglio, Gustavo Marques, and Frank O. Bryan https://doi.org/10.5281/zenodo.11165668
Model output for: "CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea" Giovanni G. Seijo-Ellis, Donata Giglio, Gustavo Marques, and Frank O. Bryan https://doi.org/10.5281/zenodo.11264009
Trajectories of backtracked passive particles for: "CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea" Giovanni G. Seijo-Ellis and Donata Giglio https://doi.org/10.5281/zenodo.11267615
Model code and software
Model source code for CESM2 version cesm2_3_alpha16b as used in "CARIB12: A Regional Community Earth System Model / Modular Ocean Model 6 Configuration of the Caribbean Sea" Giovanni G. Seijo-Ellis, Donata Giglio, Gustavo Marques, and Frank O. Bryan https://doi.org/10.5281/zenodo.11289424