Climate change is a global process that affects everyone on Earth. Until recently, however, the tools used in climate change research, and the data driving and produced by those studies, were accessible to only a small portion of the population. Today, government agencies increasingly make weather and climate data open and available to the public, and research institutions, such as the CI's Center for Robust Decision Making on Climate and Energy Policy (RDCEP), develop and release user-friendly computational tools for working with that information. With this new access, all that's left is familiarizing ourselves with the data and the tools.
Last week, as part of RDCEP’s Discovery Engines: Under the Hood workshop series, David Kelly and Michael Glotter taught an audience of researchers and students from diverse backgrounds about climate data access and use.
Glotter's talk began by covering the different types of data and their pros and cons. How can we get our hands on this data ourselves? What can we do with it if we are researchers, policymakers, or just curious laypeople?
Before getting ahead of ourselves and downloading data, he said, we should stop and consider a few things. Which time period, geographical domain, and scale are we interested in? How concerned are we with uncertainty?
Like species in the rainforest, data types are diverse, varying in accuracy, completeness, resolution, and other characteristics. We must decide which type to use on a case-by-case basis, always keeping in mind the limitations of our data.
Not all data is created equal, either. Data, in the strictest sense, is a collection of real-world observations that we feed into a model. What comes out at the other end, the future projections, is not data in that sense. Our best guess at the future is not the same as observations of the world.
Once someone has acquired the data they're interested in, what can be done? Many of the models and datasets used by climate change researchers are too large and complex for people to use on their own computers. By combining cloud computing with an online workflow platform called Galaxy, RDCEP has created a computational tool called FACE-IT to expand access to climate science.
FACE-IT allows laypeople and policymakers to tinker with data as if they were experts, using a workflow they build themselves in an online environment. A workflow is like a bread machine: by supplying the ingredients and choosing the steps, you get a loaf of bread without carrying out the steps yourself. Using FACE-IT's workflows, you can run analyses and create visualizations of climate and its impact on agriculture, economics, and other sectors, without needing to understand the intricacies of how the data is processed or how a model is run.
At the Discovery Engines workshop, David Kelly walked attendees through a demo of the FACE-IT platform. In this demo, the inputs were crop and climate data, and the output was crop yields, forecast by a popular agricultural model. By dragging and dropping boxes and connecting them with arrows, FACE-IT users could design their own workflow for this analysis, then share it with the rest of the FACE-IT community.
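The drag-and-drop demo above can be thought of as function composition: each box is a step, and the arrows joining boxes determine the order in which steps run on the data. The sketch below illustrates that idea in Python. Everything in it, the function names, the toy crop model, and the numbers, is invented for illustration and is not part of the real FACE-IT or Galaxy software.

```python
def load_climate_data():
    # Stand-in for a real climate dataset: growing-season temperature (C)
    # and precipitation (mm) for three hypothetical sites.
    return [
        {"site": "A", "temp_c": 22.0, "precip_mm": 500},
        {"site": "B", "temp_c": 28.0, "precip_mm": 300},
        {"site": "C", "temp_c": 18.0, "precip_mm": 700},
    ]

def toy_crop_model(record):
    # A deliberately simple stand-in for an agricultural model:
    # yield falls off as temperature departs from an optimum.
    optimum_c = 21.0
    base_yield = 10.0  # tonnes per hectare, an invented baseline
    penalty = 0.5 * abs(record["temp_c"] - optimum_c)
    return {"site": record["site"], "yield_t_ha": max(base_yield - penalty, 0.0)}

def run_workflow(steps, data):
    # Apply each step to every record in sequence,
    # like boxes joined by arrows in the workflow editor.
    for step in steps:
        data = [step(record) for record in data]
    return data

results = run_workflow([toy_crop_model], load_climate_data())
for r in results:
    print(r["site"], round(r["yield_t_ha"], 1))
```

The point of the abstraction is that `run_workflow` never needs to know what any step does internally, which is what lets a platform like FACE-IT swap models, datasets, and visualizations in and out of a shared pipeline.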
The implications of this extend far beyond agriculture: the platform may be useful to researchers or policymakers who want to investigate climate effects and predict future outcomes without expert programming knowledge or access to powerful computing resources. Embracing it gives more people access to climate data, facilitating and accelerating critical climate research.