Workflow using Future Climate API

Author
Samantha Ewers

WyAdapt Future Climate API Example

For this example, we provide some basic workflows for accessing, querying, and downloading the climate data available within the WyAdapt cyberinfrastructure. We use REST APIs to query existing climate summary data stored in the ARCC Pathfinder S3 storage as Cloud Optimized GeoTIFFs (COGs). Currently we have multiple GCMs whose daily data have been aggregated to monthly and annual COGs.

Using Python with WyAdapt Climate Data

Within the WyAdapt.org application, you can find a complete listing of API Endpoints here: https://wyadapt.org/swagger/index.html

For the examples below we use Python, with the following libraries installed in our environment. Once they are installed, import them.
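If any of these are missing, they can usually be installed from inside the notebook; a minimal setup sketch, assuming the standard PyPI package names that match the imports below:

In [ ]:
# One-time setup (assumed PyPI package names matching the imports below)
%pip install xarray geopandas shapely pandas matplotlib seaborn numpy cartopy folium rasterio leafmap plotly geoviews holoviews requests rioxarray hvplot wget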

In [ ]:
import xarray as xr
import geopandas as gpd
from shapely.geometry import mapping
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display, Markdown, HTML
import numpy as np
import cartopy.crs as ccrs
from matplotlib.colors import Normalize
from folium.raster_layers import ImageOverlay
from folium.plugins import FloatImage
import rasterio
import leafmap.foliumap as leafmap
import plotly.express as px
import plotly.graph_objects as go
import geoviews as gv
from rasterio.mask import mask
import plotly.io as pio 
import holoviews as hv
import requests
import rioxarray
import hvplot.xarray
import os
import wget

Let's take a look at some different aspects of the data, starting with the available GCMs.

In [ ]:
# Get all available GCMs
gcm_url = "https://wyadapt.org/api/ClimateDataRepo/GcmRcpVariantCombinations"
# Make a GET request
response = requests.get(gcm_url)
gcm = response.json()    
# Load data into a DataFrame
gcm = pd.DataFrame(gcm)

# Select distinct rows and the needed columns, then sort by 'gcm'
gcm_selected = gcm[['gcm', 'variant', 'description']].drop_duplicates().sort_values(by='gcm')

gcm_selected.reset_index(drop=True, inplace=True)
display(gcm_selected)
gcm variant description
0 access-cm2 r5i1p1f1 Commonwealth Scientific and Industrial Researc...
1 canesm5 r1i1p2f1 Canadian Centre for Climate Modelling and Anal...
2 cesm2 r11i1p1f1 Community Earth System Model Contributors
3 cnrm-esm2-1 r1i1p1f2 Centre National de Recherches Météorologiques/...
4 ec-earth3 r1i1p1f1 EC-Earth consortium
5 ec-earth3-veg r1i1p1f1 EC-Earth consortium
6 ensemble p75 75th percentile across all GCM
7 ensemble p25 25th percentile across all GCM
8 ensemble p90 90th percentile across all GCM
9 ensemble p10 10th percentile across all GCM
10 ensemble p50 50th percentile across all GCM
11 fgoals-g3 r1i1p1f1 Chinese Academy of Sciences
12 giss-e2-1-g r1i1p1f2 Goddard Institute for Space Studies
13 miroc6 r1i1p1f1 Atmosphere and Ocean Research Institute (The U...
14 mpi-esm1-2-hr r7i1p1f1 Max Planck Institute for Meteorology
15 mpi-esm1-2-hr r3i1p1f1 Max Planck Institute for Meteorology
16 mpi-esm1-2-lr r7i1p1f1 Max Planck Institute for Meteorology
17 noresm2-mm r1i1p1f1 Norwegian Earth System Model
18 taiesm1 r1i1p1f1 Research Center for Environmental Changes, Aca...
19 ukesm1-0-ll r2i1p1f2 Met Office, Hadley Centre (UK)
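If you only need a single model, the combinations table is a plain DataFrame and can be filtered accordingly; a small sketch, using cesm2 (one of the models listed above) as the example:

In [ ]:
# Look up the variant that accompanies a particular GCM
cesm2 = gcm_selected[gcm_selected['gcm'] == 'cesm2']
print(cesm2[['gcm', 'variant']].to_string(index=False))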

As the table shows, there are a lot of GCMs. Next let's check the available variables using the Variables endpoint.

In [ ]:
variable_url = "https://wyadapt.org/api/ClimateDataRepo/Variables"

# Send GET request
response = requests.get(variable_url)
variables = response.json() 
variables = pd.DataFrame(variables)
variables_selected = variables[['variable', 'alias', 'units', 'scaleFactor', 'description']].drop_duplicates()

display(variables_selected)
variable alias units scaleFactor description
0 prec Precipitation mm 0.01 Precipitation for monthly or annual
1 t2 Average Temperature °C 0.01 Average Temperature for monthly or annual
2 t2max Maximum Temperature °C 0.01 Maximum Temperature for monthly or annual
3 t2min Minimum Temperature °C 0.01 Minimum Temperature for monthly or annual
4 cdd Max Consecutive Days Precipitation <1mm/day days 1.00 Max number of consecutive days with precipitat...
5 cwd Max Consecutive Days Precipitation >1mm/day days 1.00 Max number of consecutive days with precipitat...
6 fd Frost Days days 1.00 Frost days: Annual number of days with Min tem...
7 id Icing Days days 1.00 Icing days: Annual number of days with Max tem...
8 r20mm Annual Count Precipitaiton >=20mm/day days 1.00 Annual count of days when precipitation >= 20m...
9 r95p Annual Total Precipitation When Daily >95th mm 0.01 Annual total precipitation when daily precip e...
10 rx1day Annual Max 1-day Precipitation mm 0.01 Rx1day - Annual maximum one-day precipitation ...
11 rx5day Annual Max 5-day Precipitation mm 0.01 Rx5day - Annual maximum five-day precipitation...
12 sdii Simple Precipitation Intensity Index mm 0.01 Simple Precipitation Intensity Index (Annual m...
13 snow Snow mm 0.01 Snow Water Equivalent monthly
14 snowApril SWE April 1st mm 0.01 Snow Water Equivalent on April 1st for given year
15 snowFeb SWE February 1st mm 0.01 Snow Water Equivalent on February 1st for give...
16 snowJan SWE January 1st mm 0.01 Snow Water Equivalent on January 1st for given...
17 snowJune SWE June 1st mm 0.01 Snow Water Equivalent on June 1st for given year
18 snowMar SWE March 1st mm 0.01 Snow Water Equivalent on March 1st for given year
19 snowMay SWE May 1st mm 0.01 Snow Water Equivalent on May 1st for given year
20 tnn Monthly Minimum of Daily Temperature °C 0.01 Monthly minimum of daily minimum temperature (...
21 txx Monthly Maximum of Daily Temperature °C 0.01 Monthly maximum of daily maximum temperature (...
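Note the scaleFactor column: the rasters appear to store scaled values, and multiplying by this factor recovers the physical units (which is why the clipping examples below multiply temperatures by 0.01). A small sketch of a lookup helper (the helper itself is ours, not part of the API):

In [ ]:
# Hypothetical helper: convert a raw stored value to physical units via the variable's scaleFactor
def apply_scale(raw_value, variable):
    factor = variables_selected.loc[variables_selected['variable'] == variable, 'scaleFactor'].iloc[0]
    return raw_value * factor

print(apply_scale(342, 't2'))  # raw 342 at scaleFactor 0.01 -> 3.42 °C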

Daily data are aggregated to several different timescales; the query below lists the available options.

In [ ]:
url_timescale = "https://wyadapt.org/api/ClimateDataRepo/Timescale"
response_timescale = requests.get(url_timescale)
timescales = response_timescale.json()  
markdown_text = "### Available Timescales:\n" + "\n".join(f"- {timescale}" for timescale in timescales)

# Display the Markdown
display(Markdown(markdown_text))

Available Timescales:

  • 30yearmonthlyrefdif
  • monthlyrefdif
  • annual
  • monthly
  • 30yearannual
  • 30yearmonthly
  • 30yearannualrefdif
  • annualrefdif

Let's get our available spatial resolutions.

In [ ]:
#Get resolution information
url_resolution = "https://wyadapt.org/api/ClimateDataRepo/Resolution"
response_resolution = requests.get(url_resolution)
data_resolution = response_resolution.json()
df_resolutions = pd.DataFrame(data_resolution)
display(df_resolutions)
gcm variant resolution description
0 access-cm2 r5i1p1f1 d02_9km 9 kilometer cell size
1 canesm5 r1i1p2f1 d02_9km 9 kilometer cell size
2 cesm2 r11i1p1f1 d02_9km 9 kilometer cell size
3 cnrm-esm2-1 r1i1p1f2 d02_9km 9 kilometer cell size
4 ec-earth3 r1i1p1f1 d02_9km 9 kilometer cell size
5 ec-earth3-veg r1i1p1f1 d02_9km 9 kilometer cell size
6 ensemble p10 d02_9km 9 kilometer cell size
7 ensemble p25 d02_9km 9 kilometer cell size
8 ensemble p50 d02_9km 9 kilometer cell size
9 ensemble p75 d02_9km 9 kilometer cell size
10 ensemble p90 d02_9km 9 kilometer cell size
11 fgoals-g3 r1i1p1f1 d02_9km 9 kilometer cell size
12 giss-e2-1-g r1i1p1f2 d02_9km 9 kilometer cell size
13 miroc6 r1i1p1f1 d02_9km 9 kilometer cell size
14 mpi-esm1-2-hr r3i1p1f1 d02_9km 9 kilometer cell size
15 mpi-esm1-2-hr r7i1p1f1 d02_9km 9 kilometer cell size
16 mpi-esm1-2-lr r7i1p1f1 d02_9km 9 kilometer cell size
17 noresm2-mm r1i1p1f1 d02_9km 9 kilometer cell size
18 taiesm1 r1i1p1f1 d02_9km 9 kilometer cell size
19 ukesm1-0-ll r2i1p1f2 d02_9km 9 kilometer cell size
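All of the listing endpoints used so far follow the same pattern (a GET request returning JSON), so a small generic helper can cut down on the repetition; a sketch that uses only the endpoints already shown:

In [ ]:
# Generic fetch for the simple listing endpoints
def get_endpoint(name, base="https://wyadapt.org/api/ClimateDataRepo/"):
    resp = requests.get(base + name)
    resp.raise_for_status()  # raise on HTTP errors instead of failing silently
    return resp.json()

print(get_endpoint("Timescale"))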

Now that we have an idea of the available datasets, let's learn how to interact with them. There are several methods!

Since the climate data are Cloud Optimized GeoTIFFs (COGs), we can query them spatially in the cloud without having to download files. To do this, let's first add some polygons to our environment. We will start with the counties of Wyoming from the ESRI Feature Service. Notice that we add an ID column derived from the GeoDataFrame's index. We will use this later on in our processing.

In [ ]:
#Get Wyoming counties
base_url = "https://services.arcgis.com/P3ePLMYs2RVChkJx/ArcGIS/rest/services/USA_Boundaries_2022/FeatureServer/2/query"
params = {
    'where': "STATE_FIPS='56'",
    'outFields': '*',
    'returnGeometry': 'true',
    'f': 'geojson'
}
response = requests.get(base_url, params=params)

# Load the GeoJSON response into a GeoDataFrame
counties = gpd.GeoDataFrame.from_features(response.json())
counties['ID'] = counties.index.astype(int)
# Plotting the counties
fig, ax = plt.subplots(figsize=(10, 8))
counties.plot(ax=ax, color='#d3d3d3', edgecolor='black')  # Light gray fill with black county outlines

# Customizing the plot
ax.set_title('Wyoming Counties')
ax.set_xlim([-111.0546, -104.0523])
ax.set_ylim([40.99478, 45.00582])
plt.show()
(Figure: map of Wyoming county boundaries)

Let’s look at the ensemble annual mean temperature for Albany County over the entire time series. First we need to obtain the URLs for the specified variable and timescale (notice the query parameters within the URL).

In [ ]:
response = requests.get("https://wyadapt.org/api/ClimateDataRepo/ClimateUrl?timescale=annual&gcm=ensemble&variant=p50&variable=t2")
data = response.json()

df = pd.json_normalize(data)

df = df.drop(columns=["document.id", "..JSON"], errors="ignore")


df['urlCOG'] = "/vsicurl/" + df['url']

display(df)
url            https://pathfinder.arcc.uwyo.edu/wyadapt/d02_9...
filename       t2_ensemble.p50_ssp370_BC_d02_1981-2099_annual...
resolution     d02_9km
biascorrected  True
timescale      annual
gcm            ensemble
rcp            ssp370
variable       t2
variant        p50
years          1981-2099
sampMonth      13
scale          0.01
units          °C
alias          Average Temperature
statistics     [-620, 3186, 1274.2671, 604.6369]
urlCOG         /vsicurl/https://pathfinder.arcc.uwyo.edu/wyad...
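The '/vsicurl/' prefix tells GDAL (which rasterio uses under the hood) to stream byte ranges of the remote file over HTTP rather than downloading it in full. A quick sanity check that the remote COG opens and reports sensible metadata:

In [ ]:
# Inspect the remote COG without downloading it
with rasterio.open(df['urlCOG'][0]) as src:
    print(src.count, "bands")       # one band per year, 1981-2099
    print(src.crs)                  # coordinate reference system
    print(src.descriptions[:3])     # band descriptions encode the year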

Now let's use our mean temperature (t2) URL to get the temperature for Albany County. You will notice our output has 119 rows, representing 1981-2099.

In [ ]:
mean_values = []
years = []

# Example usage for Albany County
albany_county = counties[counties['FIPS'] == '56001']
albany_county_geom = [mapping(albany_county['geometry'].iloc[0])]

# Open and process each raster
for url in df['urlCOG']:
    with rasterio.open(url) as src:
        # Extract descriptions (band names) inside the 'with' block
        if src.descriptions:
            years += [desc.split('_')[-1] for desc in src.descriptions]
        
        # Clip the raster to Albany County
        clipped, clipped_transform = mask(src, albany_county_geom, crop=True)
        
        # Replace NoData values with NaN
        clipped = np.where(clipped == src.nodata, np.nan, clipped)
        
        # Calculate the mean for the clipped raster and add scaling factor
        mean_value = np.nanmean(clipped, axis=(1, 2))*0.01  # mean across spatial dimensions
        mean_values.append(mean_value)
# Round mean values
mean_values = np.round(mean_values, 2)
# Create a DataFrame to store the results
mean_df = pd.DataFrame({
    'Year': years,  # Years from the band names
    'Mean_Temperature': np.concatenate(mean_values)  # Mean values flattened
})

# Display the result
print(mean_df)
     Year  Mean_Temperature
0    1981              3.42
1    1982              3.93
2    1983              3.19
3    1984              3.25
4    1985              3.69
..    ...               ...
114  2095              9.31
115  2096              9.15
116  2097              9.21
117  2098              9.43
118  2099              9.42

[119 rows x 2 columns]
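As an aside, the same clip-and-average can be done with rioxarray (imported earlier), which keeps everything in xarray structures; a sketch, assuming the county geometry is in the raster's CRS (the same assumption the rasterio.mask call above makes):

In [ ]:
# Alternative sketch using rioxarray instead of rasterio + mask
da = rioxarray.open_rasterio(df['urlCOG'][0], masked=True)     # masked=True maps NoData to NaN
clipped = da.rio.clip(albany_county_geom)                      # geometry assumed to be in the raster's CRS
annual_means = (clipped.mean(dim=("x", "y")) * 0.01).round(2)  # same 0.01 scale factor as above
print(annual_means.values[:5])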

Let’s visualize the results:

In [ ]:
# Set Plotly renderer to display inside the notebook
pio.renderers.default = 'notebook_connected'

# Convert the 'Year' column to integers, if needed
mean_df['Year'] = mean_df['Year'].astype(int)

# Plotting the data with Plotly
fig = go.Figure()

fig.add_trace(go.Scatter(
    x=mean_df['Year'],
    y=mean_df['Mean_Temperature'],
    mode='lines',
    name='Albany County',
    hovertemplate='Year: %{x}<br>Temperature: %{y} °C<extra>Albany County</extra>',
    line=dict(color='red')
))

# Customize the layout
fig.update_layout(
    title='Average Temperature Over Time for Albany County',
    xaxis_title='Year',
    yaxis_title='Average Temperature (°C)',
    xaxis=dict(tickmode='array', tickvals=mean_df['Year'][::10], ticktext=mean_df['Year'][::10]),  # Display every 10th year
    hovermode='x unified',
    showlegend=True
)

# Display the plot
fig.show()

# Optional: Export the plot to HTML and display it
plot_html = fig.to_html(full_html=False, include_plotlyjs='cdn')
display(HTML(plot_html))

Next we will expand our workflow to include all counties.

In [ ]:
pio.renderers.default = 'notebook_connected'

# Initialize a list to store the results
results = []

# Extract years from the first raster only
with rasterio.open(df['urlCOG'][0]) as src:
    years = [desc.split('_')[-1] for desc in src.descriptions]  # Extract years from band descriptions

# Loop through each county
for index, county in counties.iterrows():
    county_geom = [mapping(county['geometry'])]  # Get the geometry for each county
    
    mean_values = []

    # Open the first COG (since all rasters should have the same bands/years)
    with rasterio.open(df['urlCOG'][0]) as src:
        for band_index in range(1, src.count + 1):  # Loop over each band (starting from 1)

            # Clip the raster to the county's geometry
            clipped, clipped_transform = mask(src, county_geom, crop=True, indexes=band_index)

            # Replace NoData values with NaN
            clipped = np.where(clipped == src.nodata, np.nan, clipped)

            # Calculate the mean value for the clipped raster
            mean_value = np.nanmean(clipped) * 0.01  # Apply scaling of 0.01
            mean_values.append(mean_value)


    # Store the mean values and years for the county
    mean_value_df = pd.DataFrame({
        'Year': np.array(years).flatten(),  # Ensure it's a 1D array
        'Mean_Temperature': np.round(np.array(mean_values).flatten(), 2),  # Round to 2 decimal places
        'County': county['NAME']  # Add the county name
    })
    results.append(mean_value_df)

# Combine all county results into a single DataFrame
all_data = pd.concat(results)

# Plotting the results using Plotly
fig = go.Figure()

# Add a line for each county
for name, group in all_data.groupby('County'):
    fig.add_trace(go.Scatter(
        x=group['Year'],
        y=group['Mean_Temperature'],
        mode='lines',
        name=name,
        hovertemplate='County: %{text}<br>Year: %{x}<br>Temperature: %{y:.2f} °C<extra></extra>',
        text=name  # Text information (county name) for the hover template
    ))

# Update the layout to show ticks every 30 years
tickvals = list(range(int(all_data['Year'].min()), int(all_data['Year'].max()) + 1, 30))

fig.update_layout(
    title='Annual Average Temperature for Wyoming Counties',
    xaxis_title='Year',
    yaxis_title='Average Temperature (°C)',
    xaxis=dict(
        tickmode='array',
        tickvals=tickvals,
        ticktext=tickvals,
        tickangle=45
    ),
    hovermode='closest',
    showlegend=True
)

# Display the plot
fig.show()
plot_html = fig.to_html(full_html=False, include_plotlyjs='cdn')
display(HTML(plot_html))

This allows us to look at the median temperature through time for each county. But we know there is more than one GCM; hence the ensemble of the models. Perhaps it would be a good idea to incorporate model variance. Luckily for us, these data are available: the ensemble dataset has the 10th, 25th, 50th, 75th, and 90th percentiles calculated. Let's take advantage of this, using our Albany County example. In order to do this we need to expand our query to get the other URLs.

In [ ]:
response = requests.get("https://wyadapt.org/api/ClimateDataRepo/ClimateUrl?variable=t2&timescale=annual&gcm=ensemble")
data = response.json()

df = pd.json_normalize(data)

df = df.drop(columns=["document.id", "..JSON"], errors="ignore")


df['urlCOG'] = "/vsicurl/" + df['url']

display(df)
All five results share the same resolution (d02_9km), biascorrected (True), timescale (annual), gcm (ensemble), rcp (ssp370), variable (t2), years (1981-2099), sampMonth (13), scale (0.01), units (°C), and alias (Average Temperature); they differ only in variant, filename, and statistics (the url and urlCOG columns are truncated in the display):

   variant  filename                                            statistics
0  p50      t2_ensemble.p50_ssp370_BC_d02_1981-2099_annual...  [-620, 3186, 1274.2671, 604.6369]
1  p10      t2_ensemble.p10_ssp370_BC_d02_1981-2099_annual...  [-738, 3063, 1171.6478, 614.0961]
2  p25      t2_ensemble.p25_ssp370_BC_d02_1981-2099_annual...  [-692, 3105, 1218.9653, 610.4095]
3  p75      t2_ensemble.p75_ssp370_BC_d02_1981-2099_annual...  [-581, 3269, 1335.1138, 600.961]
4  p90      t2_ensemble.p90_ssp370_BC_d02_1981-2099_annual...  [-551, 3331, 1387.0731, 599.6212]

Let's plot the different percentiles for ensemble for Albany County.

In [ ]:
pio.renderers.default = 'notebook_connected'

albany_county = counties[counties['FIPS'] == '56001']
albany_county_geom = [mapping(albany_county['geometry'].iloc[0])]

# Function to process COG dataset (based on the earlier processing)
def process_cog_dataset(url, county_geom):
    with rasterio.open(url) as src:
        years = [desc.split('_')[-1] for desc in src.descriptions]
        clipped, clipped_transform = mask(src, county_geom, crop=True)
        clipped = np.where(clipped == src.nodata, np.nan, clipped)
        mean_value = np.nanmean(clipped, axis=(1, 2)) * 0.01  # Apply scaling factor
        return mean_value, years

# Process each percentile dataset for Albany County
percentile_data = {}
all_years = None
for _, row in df.iterrows():
    url = row['urlCOG']
    variant = row['variant']
    mean_value, years = process_cog_dataset(url, albany_county_geom)
    
    percentile_data[variant] = mean_value
    if all_years is None:
        all_years = years  # Keep track of the years only once

# Combine the data into a DataFrame
combined_data = pd.DataFrame(percentile_data)
combined_data['time'] = all_years


# Plotting with Plotly
fig = go.Figure()

# Add fill for p25 to p75 with hover info
fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p75'],
    mode='lines',
    line=dict(width=0),
    showlegend=False,
    hoverinfo='skip'
))

fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p25'],
    mode='lines',
    fill='tonexty',
    fillcolor='rgba(128, 128, 128, 0.5)',
    line=dict(width=0),
    showlegend=True,
    name='p25-p75',
    hovertemplate='Year: %{x}<br>Temperature: %{y:.2f} °C (p25)<extra></extra>'
))

fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p75'],
    mode='lines',
    fill='tonexty',
    fillcolor='rgba(128, 128, 128, 0.5)',
    line=dict(width=0),
    showlegend=False,
    hovertemplate='Year: %{x}<br>Temperature: %{y:.2f} °C (p75)<extra></extra>'
))

# Add fill for p10 to p90 with hover info
fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p90'],
    mode='lines',
    line=dict(width=0),
    showlegend=False,
    hoverinfo='skip'
))

fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p10'],
    mode='lines',
    fill='tonexty',
    fillcolor='rgba(128, 128, 128, 0.3)',
    line=dict(width=0),
    showlegend=True,
    name='p10-p90',
    hovertemplate='Year: %{x}<br>Temperature: %{y:.2f} °C (p10)<extra></extra>'
))

fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p90'],
    mode='lines',
    fill='tonexty',
    fillcolor='rgba(128, 128, 128, 0.3)',
    line=dict(width=0),
    showlegend=False,
    hovertemplate='Year: %{x}<br>Temperature: %{y:.2f} °C (p90)<extra></extra>'
))

# Add median line
fig.add_trace(go.Scatter(
    x=combined_data['time'],
    y=combined_data['p50'],
    mode='lines',
    name='Median (p50)',
    hovertemplate='Year: %{x}<br>Temperature: %{y:.2f} °C<extra>Variant: p50</extra>',
    line=dict(color='red')
))

# Update layout to show ticks every 30 years
tickvals = list(range(int(combined_data['time'].min()), int(combined_data['time'].max()) + 1, 30))

fig.update_layout(
    title='Annual Average Temperature for Albany County (Percentiles)',
    xaxis_title='Year',
    yaxis_title='Average Temperature (°C)',
    xaxis=dict(
        tickmode='array',
        tickvals=tickvals,
        ticktext=tickvals,
        tickangle=45,
        showgrid=True, 
        gridcolor='lightgray'
    ),
    yaxis=dict(
        showgrid=True, 
        gridcolor='lightgray'
    ),
    hovermode='closest',
    showlegend=True,
    plot_bgcolor='white'
)

fig.show()

# Save the plot as an HTML string with all resources embedded inline
plot_html = fig.to_html(full_html=False, include_plotlyjs='cdn')

# Display the HTML content directly in the notebook
display(HTML(plot_html))

We can also map the temperature across the entire extent for a single year if we want.

In [ ]:
hv.extension('bokeh', 'matplotlib') 

pio.renderers.default = 'notebook_connected'

def open_cog_band(url, band, nodata_value=-2147483648):
    vsicurl_path = url
    
    # Open the dataset and read the requested band
    with rasterio.open(vsicurl_path) as src:
        data = src.read(band)
        
        # Replace NoData values with NaN
        data = np.where(data == nodata_value, np.nan, data)
        # Apply the 0.01 scale factor
        data = data * 0.01
        # Convert to xarray.DataArray for easier handling
        data_array = xr.DataArray(
            data,
            dims=("y", "x"),
            coords={"y": np.linspace(src.bounds.top, src.bounds.bottom, data.shape[0]), 
                    "x": np.linspace(src.bounds.left, src.bounds.right, data.shape[1])}
        )
        # Add CRS information for plotting
        data_array.rio.write_crs(src.crs, inplace=True)
    
    return data_array


url_2024 = df['urlCOG'][0]  # Grab the first URL, which in this case is the p50 (median) ensemble

band_2024 = 44  # Band index for 2024 (band 1 corresponds to 1981)
data_2024 = open_cog_band(url_2024, band_2024)

# Use hvplot to create the image plot directly with CRS handling
temperature_plot = data_2024.hvplot.image(
    x='x', y='y', crs=ccrs.PlateCarree(),
    cmap='inferno', width=500, height=400, colorbar=True,
    title='2024 Mean Temperature (°C)',
    geo=True  # Treat data as geospatial
).opts(xaxis=None, yaxis=None)

# Overlay the plot on a basemap (OSM tiles)
basemap = gv.tile_sources.OSM()

# Combine the basemap and the temperature plot
final_plot = basemap * temperature_plot

# Display the plot
display(final_plot)
(Figure: 2024 mean temperature map overlaid on an OpenStreetMap basemap)
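Rather than hard-coding band 44, the band index for a given year can be looked up from the band descriptions, the same way the years were extracted earlier; a small sketch:

In [ ]:
# Find the band index for a given year from the band descriptions (bands are 1-indexed)
with rasterio.open(url_2024) as src:
    band_for_2024 = next(i + 1 for i, d in enumerate(src.descriptions) if d.endswith("2024"))
print(band_for_2024)  # expected: 44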

Download WyAdapt Future Climate Cloud Optimized GeoTIFF (COG) Files

If you really want, you can download the COG files directly from S3 to your personal machine. Below is an example of how to use our API to access the URLs.

In [ ]:
def download_cogs(api_url, download_dir):
    """
    Download all COG files from the specified API URL and save them to the local directory.
    
    Parameters:
    - api_url (str): The API URL to fetch COGs metadata.
    - download_dir (str): Local directory to save downloaded COGs.
    
    """
    try:
        # Fetch data from the API
        response = requests.get(api_url, verify=False)  # verify=False skips SSL certificate verification
        response.raise_for_status()  # Check if the request was successful
        
        # Parse the JSON response
        data = response.json()
        
        # Normalize the data into a DataFrame
        df = pd.json_normalize(data)
        
        # Drop unnecessary columns
        df = df.drop(columns=["document.id", "..JSON"], errors="ignore")
        
        # Keep the plain download URLs (no '/vsicurl/' prefix is needed for downloading)
        df['urlCOG'] = df['url']
        
        # Ensure the download directory exists
        if not os.path.exists(download_dir):
            os.makedirs(download_dir)
        
        # Download each COG file
        for url in df['urlCOG']:
            try:
                # Extract the filename from the URL
                filename = url.split("/")[-1]
                
                # Create the local path to save the file
                local_path = os.path.join(download_dir, filename)
                
                # Download the file
                wget.download(url, local_path)
            except Exception as e:
                print(f"Error downloading {url}: {e}")
    
    except requests.RequestException as e:
        print(f"Failed to fetch data from the API: {e}")

api_url = "https://wyadapt.org/api/ClimateDataRepo/ClimateUrl?timescale=annual&gcm=ensemble&variable=t2"

# Directory where you want to download the COG files
download_directory = "D:/Projects/WYACT/cogs/test"

# Call the function to download all the COGs
download_cogs(api_url, download_directory)
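After the downloads finish, a quick listing confirms the files arrived (using the example directory above):

In [ ]:
# List the downloaded COG files
for filename in sorted(os.listdir(download_directory)):
    print(filename)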