This work is distributed under the Creative Commons Attribution 4.0 License.
GC Insights: Communicating Climate Change – Immersive Sonification for the Piano
Abstract. To convey climate change to a wider audience, I converted various CO2 records (parts per million) into music for the piano (scale notes) through the method of sonification. This data-driven piece has five movements and combines musical elements such as tone, chords, and key signatures with the data-driven notes, providing a sonic experience of climate change and the acceleration of emissions. Because this composition can be played on the piano, it offers a level of immersion beyond a visual or auditory understanding, conveying the urgency of climate change to a broader audience in a new way.
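As a purely illustrative aside, the parameter-mapping idea described in the abstract (numeric CO2 values driving musical pitch) can be sketched as below. The data values, note range, and linear mapping rule are assumptions for illustration, not the composition's actual method.

```python
# Hypothetical sketch: map CO2 concentrations (ppm) onto piano pitches.
# Values, note range, and the linear mapping are illustrative only.

co2_ppm = [316.9, 325.7, 338.8, 354.4, 369.6, 390.1, 414.2]  # example annual means

LOW_MIDI, HIGH_MIDI = 48, 84  # assumed playable range, roughly C3 to C6
lo, hi = min(co2_ppm), max(co2_ppm)

def ppm_to_midi(value):
    """Linearly rescale a CO2 value onto the chosen MIDI note range."""
    fraction = (value - lo) / (hi - lo)
    return round(LOW_MIDI + fraction * (HIGH_MIDI - LOW_MIDI))

melody = [ppm_to_midi(v) for v in co2_ppm]
print(melody)  # pitches rise as concentrations rise
```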
Status: closed
RC1: 'Comment on egusphere-2022-1356', Anonymous Referee #1, 04 Jan 2023
General comments: Thanks for your contribution. A common pitfall of this kind of musification is that the music winds up sounding similar. Basically, if CO2 or temperature is rising constantly and you link it to pitch, all the resulting music will sound roughly similar. However, by only linking one hand to the data while allowing the other to perform original music, you're doing something unique and you've managed to side-step that pitfall - so congratulations!

I hadn't encountered the GC Insights type of submission before, so I realise that some of my comments may not be addressable within the format of this kind of article. For instance, the manuscript doesn't strictly follow a scientific article template, i.e. it has no results or discussion sections. I'll defer to the editor to confirm whether they are required for GC Insight publications.

I'd like to see some comments on previous work on sonification of climate change data in your introduction. Typically, references don't contribute to the total word count, so you should be able to add as many as you'd like. Here are some starting points:
- Borromeo, L., Round, K., and Perera, J.: Climate Symphony, available at: https://www.disobedientfilms.com/climate-symphony, 2016.
- Crawford, D.: Planetary Bands, Warming World string quartet, video published by Ensia magazine, available at: https://vimeo.com/127083533, 2013.
- the Climate Music Project (https://climatemusic.org/ )
- de Mora, L., Sellar, A. A., Yool, A., Palmieri, J., Smith, R. S., Kuhlbrodt, T., Parker, R. J., Walton, J., Blackford, J. C., and Jones, C. G.: Earth system music: music generated from the United Kingdom Earth System Model (UKESM1), Geosci. Commun., 3, 263–278, https://doi.org/10.5194/gc-3-263-2020, 2020.
It would be good for you to use these to highlight how your work is novel and different from previous approaches.

The audio file:
- There are no clear breaks between the five movements. Perhaps a fermata and a bar of rest between them might help separate each movement?
- The syncopation of the first movement makes it harder for me to perceive time passing. I think perhaps you could decouple the rhythm of the left and right hands such that the left hand is closely linked to CO2, but the right hand anchors the time signature. (This is an artistic choice so I leave it up to you whether this improves or deteriorates the piece.)
- I'm not a huge fan of the sound of this instrument - it sounds very dry and digital. Perhaps a different virtual instrument might produce a better sound, or alternatively you may be able to use some reverb and EQ? If a huge budget were available, you might be able to find a local recording studio with some expensive microphones and a grand piano that you could use to record your performance. Or maybe a pianist on a service like Fiverr could perform and record it for you?
I don't think that Dropbox is the best place to keep a permanent record of this piece. The first place would be to append it to this article as a supplementary file. A scientific data repository might also be appropriate, something like Zenodo or BODC, plus this would provide a DOI. As a backup, YouTube or SoundCloud might also work for hosting; however, it's not guaranteed that any of these companies will exist in ten years (including Dropbox).

I'd like a section on how the recording was created as well. Did you program the MIDI and pass it to a virtual instrument, or did you record a live performance? What instrument, microphone and interface (if any) were used? What VST did you use to generate the audio? Did you use a DAW, and if so, which one? Were any post-processing effects added (reverb, compression, delay, etc.)? Was any mastering applied?

The main criticism that I have of this draft is that the author makes quite a few unsupported statements in the abstract, introduction and conclusions. I've made some suggestions here, but I'd recommend a careful re-reading to ensure that what is written is accurate and not hyperbolic.

A second criticism is that there's only one image permitted in Insight articles, so you really need the figure to shine. You could have one pane about the sonification method, one pane about the recording method, and one about the data derivation. At the moment, this figure is not very clear and it would really be worth putting in the effort to make it great.

On the whole, I'm happy with this as an Insight article, and I enjoyed the music.

Specific Comments:

Abstract:
L11: remove "(parts per million)"
L12: remove "(scale notes)"
L12-L15: This entire sentence should be replaced with a brief but explicit characterisation of your method. Something like "CO2 measurements from Mauna Loa were linked to musical pitch to drive the sonification, but additional musical parts were creatively composed to balance the piece, add nuance, emphasis, and emotion to the piece." (This is the part of your work that really stands out to me: it's not 100% data driven, and the musical freedom that you allowed yourself makes it stand out. It's worth emphasising this in the abstract!)
L15: Because -> As
L16: I'm not sure this is true: "it provides a level of immersion beyond a visual or auditory understanding". However, I do agree that it certainly adds a sense of urgency and gloom to the data.

Introduction:
L20: If the goal of the project was to raise awareness of climate change, how do you do that? Have you tracked the number of listeners or shown where they came from? Were they already aware of climate change? To me, it looks like the goal was to generate and share a piece of music based on climate data.
L21: CO2 isn't an indicator of climate change - it's one of the main causes.
L23: Climate change is pretty well established at this stage, right?
L25: remove "mathematically"
L26: remove "that are playable on the Piano"
L27-L29: This is unsupported.
L29: remove "out"
L29: Is this really a new type of sonification? There is definitely a precedent of other people combining data and musical choices.
L30: I don't understand how statistics got involved here or what is meant by statistically accurate? These are specific terms that don't fit this context.
I recommend changing this to: "combines climate data and creativity" and "musical piece that is data driven".

Sonification Use and Effect:
L34: "auditory display:" (replace the comma with a colon)
L35: remove "high index (" and the following ")"
L47: remove "to those that are less able"
L48-50: unsupported statement.
L52: What do you mean by type of instrument? I only hear a piano.
L52: Might be worth reading and referencing Flowers (2005) here. The key thing to note is that it's actually quite hard to get a lot of information out of sound, especially as with a single instrument you can't modify the tone, and it's challenging to perceive small fluctuations in amplitude. (Flowers, J. H.: Thirteen years of reflection on auditory graphing: promises, pitfalls and potential new directions, Proceedings of ICAD 05 - Eleventh Meeting of the International Conference on Auditory Display, Limerick, Ireland, 6–9 July, 406–409, 2005, http://sonify.psych.gatech.edu/ags2005/pdf/AGS05_Flowers.pdf)

Figure 1: This figure is not very clear to me. Did you use monthly or annual data? Why are movements 1 and 2 shown as straight lines, but movement 4 is segmented? Does the third movement use monthly data? I think you would be better served by having five panes, one for each movement, showing the Mauna Loa monthly data in black and the values that you used to drive the sonification as separate coloured lines.
L55: This isn't really the methodology; it shows which sections of the data were used by the sonification.
L56: You don't need the link to the Dropbox file here.

Methodology: Numbers to Notes:
L62: remove "basic"
L63: I've never heard of a "common musical backbone". Can you elaborate on what this means?
L72: We typically use "annual" instead of "yearly", but as this is the title of the movement, it's an artistic decision.
L72: For this and the other movements, please indicate at what timestamp they begin in the recorded piece.
L82: "and the value had to exceed the closest note value, promoting positive change": What does this mean - can you make it clearer, please?
L98: Decade -> Decadal
L109: Is there any reason why you fitted to recent data rather than using established CO2 projections? (SSP5-8.5 or even RCP8.5 would both be appropriate.) Ultimately, I suspect the difference is small, but you may reach a wider audience using these well-established projections.
L124: uniquely playable -> unique and playable
L124: piano song -> piece for piano
L126: song -> piece

Ethical statement:
The ethical statement should be after the conclusions.

Conclusions:
L129: "only available in English": I don't think that the Mauna Loa data is in English! It's just Arabic numerals!
L130: This is a bit of a bold statement: "anyone in the world can understand, regardless of what language they speak". It's not clear to me that it's true. I'm not sure that this piece would make sense if you just heard the music. In order for it to make sense, it needs to be explained in context that it is derived from climate data.
L128-130: To be honest, I think you can safely remove the first two sentences of this paragraph.
L132: "providing a unique musical and scientific experience." While this is indeed a unique experience, it's not what I would focus on here in the conclusions.

I'd like to see some suggestions on potential improvements, i.e. alternative datasets, audience surveys, etc. See for instance de Mora et al., mentioned above.

Supplement:
Table: Please add a caption or a description of the table.

Sheet music:
- Please add the tempo
- Please add the instrument (piano)
- You may want to add notation of when to hold and release the pedal.
- Please indicate where each of the five movements begins and ends. I'd recommend a double bar line at the end of each movement, as well as the title of the movement (i.e. Movement one: 40 years of yearly increase).
- This would also be a good opportunity to clarify where the data came from directly in the music, i.e. notes on the PDF stating "right hand plays annual mean CO2 from 1960-2015" or similar.
Citation: https://doi.org/10.5194/egusphere-2022-1356-RC1
AC1: 'Reply on RC1', Charles Conrad, 14 Feb 2023
This review is very constructive and helpful towards improving my paper. I have completed some of the more straightforward changes, and have described the changes that I will complete in response to Referee 1's comments. This is all included in the attached file where I respond to each improvement.
A challenge when completing these improvements is the word count. I would like to defer to the editor on the specific requirements and flexibility within the GC Insight format. It is possible for me to use the supplement to add further changes. This is mentioned in the provided response PDF.
I would like to thank the referee for their detailed response and extremely constructive comments. I am optimistic about the future of this manuscript.
RC2: 'Comment on egusphere-2022-1356', Anonymous Referee #2, 17 Jan 2023
General comments
The author proposes a musically-oriented sonification of CO2 concentration data. The musical piece is partly data-driven through the method of "parameter mapping", but some aspects of it (chords, dynamics) are based on a musical decision. This can be thought of as some form of data-based composition or musical arrangement. The intent is to facilitate the communication of such important data to the general public through an engaging musical experience.
The musical piece itself is interesting and quite intriguing to my ears, due to the very "chromatic" approach taken here and the way major/minor chords are used and combined. Moving beyond a purely data-based sonification and using the freedom brought by musical composition is an interesting aspect of this work in my opinion: it is both powerful (because the composer can induce intent and emotion) and challenging (because the link to the data might become weaker).
In terms of objective, I agree with the author that musically-oriented sonification has a great potential to communicate data and concepts to the general audience. One specific challenge I can foresee for this piece is that the 5-movement structure makes the music "play the data" several times, but for different time periods and data resolution (yearly/monthly): this is difficult to follow based on the audio only. The figure provided in the paper is helpful in this respect, but I believe some form of data animation would be much more efficient - maybe a future improvement worth discussing?
The main criticism I have relates to the way the paper is written: it could be significantly improved, in my opinion. If this paper were to be considered a regular scientific paper, the main issues would be the following:
1. A review of previous works is missing in the introduction, which makes it difficult to understand how the author's piece compares to existing works.
2. Explanations on how the musical piece was created and arranged are not very clear in my opinion.
3. There is no discussion of the challenges faced in creating the piece, possible improvements, future work, etc.

I understand that this is not a regular scientific paper, but still, I think improvements in the three areas above would be useful to the reader.
Specific comments
Throughout the paper: Climate change and CO2 concentrations are two distinct things - the latter is one of the main drivers of the former. This distinction could be made more explicit in the title (e.g. "Communicating the Causes of Climate Change" or something along those lines?) and throughout the paper. As an illustration, on line 22, I don't think CO2 concentrations can be termed an "indicator of climate change": global temperature (for instance) would be. In fact, an interesting sonification experiment would be to play both the cause (CO2 emissions) and the consequence (increasing temperature) together: worth a word in the discussion?

Lines 29-31: Could the author explain a bit more the choice of the adjective "statistical"? Is it only because the process is based on data, or is there additional intended meaning? Note that I'm not disputing this choice: sonification is a way of presenting data, and that's indeed part of statistics, but it might be worth stating this explicitly because I reckon some readers might have a narrower interpretation of this term.
Lines 20-31: this introduction does not include any literature review. To start with, I think the sonification handbook (https://sonification.de/handbook/) is worth citing for all technical aspects behind sonification, and also possibly for discussing topics related to auditory perception. In addition, the author could mention other sonification works and discuss similarities and differences with the presented piece. The list below is not exhaustive but provides a few examples for which I found similarities with the author's piece (in terms of either underlying data, creation of a physically playable piece or inclusion of subjective composition elements).
* CO2 concentration and increasing temperatures: https://youtu.be/ONuA9HmkF3M
* Increasing temperatures: https://youtu.be/-V2Uc8Kax_g
* Climate change projections: https://youtu.be/2YE9uHBE5OI
* Climate change: https://www.nelsonguda.com/project/threshold/
* Sea ice loss: https://youtu.be/eYXxAE5grRQ
* Climate data: https://www.jamieperera.com/climate-data-sonification
* Climate data: https://globxblog.github.io/
* Coastal Land Loss : https://datadrivendj.com/tracks/louisiana/
* Other examples in other fields at https://sonification.design/

Line 35: In the sonification handbook (https://sonification.de/handbook/), this is rather termed "indexicality", I believe. In any case, I think the author should introduce such concepts more thoroughly ("high sonification accuracy to original data" is a bit unclear).
Line 36: I don't understand what the words "set" and "boundaries" refer to here. Are they elements of the parameter mapping approach used for the right hand? As previously, the author should probably use more space to introduce all these concepts more clearly.
Line 52: Could you explain a bit more what each of these 6 elements represents?
* linear time: what do you mean exactly?
* varying length of certain notes: OK but isn't somehow redundant with rhythm, and if not what's the distinction?
* frequency, amplitude: maybe mention that they are also called pitch and loudness?

Line 53: This might be a bit of a controversial statement, but you could certainly state that it carries the information in an original and engaging way, different from a graph.
Line 63: What does the "musical backbone" refer to? Key and time signature? please make it explicit. Moreover, it might also be worth explaining that while the score is written in C major, the parameter mapping is not restricted to the 7 notes of the C major scale, but uses all 12 semitones.
Line 65: The expression "range differentiation" sounds unusual: is it widely used in sonification or in other fields (if so please provide a reference)? "Discretization" sounds more familiar to my ears (https://en.wikipedia.org/wiki/Discretization_of_continuous_features). In any case, the sentence explaining how it's done needs rewording: I guess you meant the difference between the highest and the lowest values?
Table S1: the meaning of "Increase of 1" in column "Number of Half Notes" is unclear - why not just give the total number of half-notes used in the parameter mapping, as for other rows? The header "Calculated Interval" is also unclear - maybe "data range covered by one note", if I understood correctly?
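For illustration only, the discretization discussed in the two comments above - splitting the data range into equal-width intervals, one per available note, so that a "calculated interval" would be the data range covered by one note - could be expressed as in the following sketch. It is a generic example, not a reconstruction of the author's table or mapping.

```python
# Generic discretization sketch: one equal-width bin of the data range per note.
def discretize(values, n_notes):
    lo, hi = min(values), max(values)
    interval = (hi - lo) / n_notes             # data range covered by one note
    indices = []
    for v in values:
        idx = int((v - lo) / interval)         # bin (semitone step) the value falls into
        indices.append(min(idx, n_notes - 1))  # clamp the maximum onto the top note
    return indices, interval

steps, interval = discretize([316.9, 330.2, 354.4, 414.2], n_notes=12)
print(steps, round(interval, 2))  # e.g. [0, 1, 4, 11] with ~8.11 ppm per note
```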
Lines 72-115: I have to say that I found this description quite confusing and I struggled to understand how the parameter mapping was done exactly, and to distinguish between the "objective" data-driven choices and the "subjective" composition choices. A few suggestions:
* Maybe the author could adopt a more systematic 2-step structure to describe each movement. Step 1 would be the data-driven parameter mapping: this description should be clear enough to enable a reader to reproduce this part of the score. Step 2 would be the composition choices, with the author explaining the intent.
* I think that both the score and the audio file play the movements described here in the following order: 3-1-2-4-5. Is it correct? If so the movements need to be renamed in the correct order because this is quite confusing in the current state. In any case the author should give the bar numbers associated with each movement.
* If bars 1-8 indeed correspond to movement 3, then I would expect finding 5*12=60 notes (5 years of monthly values). How can these be split into 8 measures? (60/8=7.5). I'm missing something here.
* I don't understand how the rhythm of bars 47-58 was obtained: is it data-driven or has it been chosen by the author? please clarify.
* Is the loudness of the notes somehow data-driven, or is it the author's choice? It's worth explaining for each movement, since there are strong dynamics for some of them but not for others - and in terms of perception, dynamics are quite powerful!

Line 110: I expect scientists working on future greenhouse gas emissions and concentrations might not be keen on this best-fit line projection. Why not use the CO2 concentrations for one of the IPCC scenarios? The data are available at this page (probably the same page where the author got the Mauna Loa data): https://www.ipcc-data.org/observ/ddc_co2.html
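As a purely illustrative aside on the "best-fit line projection" mentioned above, the comparison the referee suggests is straightforward to set up: fit a simple trend to the observed record and place a scenario pathway next to it. The sketch below fits and extrapolates a least-squares line with numpy on made-up values; it does not reproduce the author's fit or any IPCC scenario data.

```python
import numpy as np

# Illustrative only: a least-squares "best-fit line" extrapolated into the future.
# The values below are made up and stand in for annual mean CO2 (ppm).
years = np.array([1980.0, 1990.0, 2000.0, 2010.0, 2020.0])
co2 = np.array([338.8, 354.4, 369.6, 390.1, 414.2])

slope, intercept = np.polyfit(years, co2, deg=1)   # linear trend
future_years = np.arange(2025, 2055, 5)
projection = slope * future_years + intercept

for y, v in zip(future_years, projection):
    print(int(y), round(float(v), 1))
# A scenario pathway (e.g. from the IPCC Data Distribution Centre) could be
# plotted alongside this line to show how the two projections compare.
```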
Lines 128-130: This sounds a bit awkward and controversial: the data themselves are numbers, they are not "in English", and some may argue that the language of mathematics is also quite universal, maybe even more so than music! In addition, the explanations surrounding these data are indeed in English, but so are the explanations given by the author in this paper. I'm personally not convinced that the point of such a data-based musical piece should be to make data more understandable by anyone in the world. In my eyes, its key strength is rather to add an emotional content that is quite unique to music.

Line 133: Many sonification experiments are physically playable: for just two examples among others, see https://youtu.be/-V2Uc8Kax_g and https://youtu.be/eYXxAE5grRQ
On the score in table S1:
1. Explicitly identifying the movements would be very useful
2. Wouldn't it be more logical to adapt the time signature and possibly the tempo to the data? 4/4 sounds as logical as any other choice for yearly data, but for decadal averages why not use e.g. 10/4, and e.g. 12/4 for monthly data? (or 12/8 or 12/16? see the previous comment on the 5*12 notes of movement 3).
3. A sustain pedal is mentioned for movement 5; could it be added to the score?

Lines 122-134: I think the author could include a more thorough discussion of both the creation of this piece and any future related work. A few possible discussion topics:
* feedback on the process of turning a fully data-based sonification into a musical arrangement? What were the main challenges faced by the author?
* in many musically-oriented sonification experiments, pitch mapping is restricted to the notes of a particular scale (e.g. C major, E minor, etc.). Here the author uses all 12 semitones, which makes it sound very "chromatic" and quite intriguing. Any feedback on this "chromatic" approach? (motivation, difficulties, etc.)
* any plan to play it live on the piano, since this is mentioned as an original aspect?
* potential interest of complementing the audio with some data visualization/animation or some visual art approach to make such data even more understandable by anyone, irrespective of language?
* Any plan to communicate further on this piece, beyond this paper? (e.g. website, video, blog, art/science/communication events, etc.)
* interest of using more musical instruments?
* etc.

Technical comments
Although I'm not a native English speaker, many sentences sound a bit awkward to me: the paper may benefit from an editorial review on this aspect.
Citation: https://doi.org/10.5194/egusphere-2022-1356-RC2
AC2: 'Reply on RC2', Charles Conrad, 14 Feb 2023
I greatly appreciate this thorough review. In similar fashion to the comments from RC1, I have completed some of the more straightforward changes, and have described the changes that I will complete in response to each of Referee 2's comments in an attached pdf file.
Once again, I inquire as to the word count specifications of a GC Insight and defer to the editor for such a decision.
To RC2, thank you for the detailed review that will surely improve my work and writing.
Data sets
Global Monitoring Laboratory: Trends in Atmospheric Carbon Dioxide, Earth System Research Laboratories, https://www.esrl.noaa.gov/gmd/ccgg/trends/
Viewed
| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 512 | 140 | 41 | 693 | 102 | 40 | 23 |