TU Delft separate out standard of protection (os-climate#105)
* TU Delft separate out standard of protection

Signed-off-by: Joe Moorhouse <[email protected]>

* Lint

Signed-off-by: Joe Moorhouse <[email protected]>

* Chore: pre-commit autoupdate

---------

Signed-off-by: Joe Moorhouse <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
joemoorhouse and pre-commit-ci[bot] authored Sep 1, 2024
1 parent f8df905 commit 3bcc181
Showing 7 changed files with 231 additions and 71 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -190,3 +190,6 @@ tests/test_output/*

# Cython debug symbols
cython_debug/

# Tests (test_output for files created in running tests but not to be committed, e.g. not expected results)
tests/test_output/*
10 changes: 5 additions & 5 deletions src/hazard/onboard/tudelft_flood.md
@@ -1,13 +1,13 @@
-Flood depth for riverine floods occurring in Europe in the present and future climate, including protection levels
-from the FLOPROS database.
+Unprotected flood depth for riverine floods occurring in Europe in the present and future climate.
Based on the CLMcom-CCLM4-8-17-EC-EARTH regional climate simulation (EURO-CORDEX), sets are available for RCP 4.5 and RCP 8.5
-for projected (central) years 2035 and 2085. The set has a spatial resolution of 100m and comprises return periods of
+for projected (central) years 2035 and 2085. The set has a spatial resolution of 100 m and comprises return periods of
10, 30, 100, 300 and 1000 years.

The data set is [here](https://data.4tu.nl/datasets/df7b63b0-1114-4515-a562-117ca165dc5b), part of the
[RAIN data set](https://data.4tu.nl/collections/1e84bf47-5838-40cb-b381-64d3497b3b36)
(Risk Analysis of Infrastructure Networks in Response to Extreme Weather). The RAIN report is
[here](http://rain-project.eu/wp-content/uploads/2016/09/D2.5_REPORT_final.pdf).

-To derive this indicator, 'River_flood_extent_X_Y_Z_with_protection' and 'River_flood_depth_X_Y_Z_R' data sets are combined to
-yield protected flood depths.
+Unprotected flood depths are combined from the 'River_flood_depth_X_Y_Z_R' data sets.
+Standard of protection data is separately available using the 'River_flood_extent_X_Y_Z_with_protection' data set
+(which makes use of the FLOPROS database), which provides protected flood extent.
193 changes: 148 additions & 45 deletions src/hazard/onboard/tudelft_flood.py
@@ -81,13 +81,35 @@ def __init__(self, source_dir: str, fs: Optional[AbstractFileSystem] = None):
}
self.zip_url = "https://data.4tu.nl/file/df7b63b0-1114-4515-a562-117ca165dc5b/5e6e4334-15b5-4721-a88d-0c8ca34aee17"

-self._resource = list(self.inventory())[0]
"""
This is a comment on the understanding of the data that underlies the processing herein.
The files comprise flood depth and flood extent, protected and unprotected.
Flood extent is understood to contain the minimum return period for which a flood can occur.
This implies that *unprotected* flood extent is simply inferred from flood depth, presenting the
minimum return period for which flood depth is non-zero. This can be (and was for some examples) checked, e.g. for
pixels where the 30 year return flood depth is non-zero and the 10 year return flood depth is zero, which
should then have a 30 year flood extent. Protected flood extent is defined in the same way, but some caution is required.
The data set takes into account a minimum and maximum standard of protection (SoP) from FLOPROS as shown
in Fig 3.2 of http://rain-project.eu/wp-content/uploads/2016/09/D2.5_REPORT_final.pdf
The protected extent (assuming the flood depth curve is non-zero throughout) would then be the maximum SoP. In the UK,
for example, the SoP is between 100 years and 300 years. It is tempting to set protected flood depths which are less
than the maximum SoP to zero; however, this will not give us the behaviour we want in calculations.
Say we have 30, 100, 300 and 1000 year flood depths of 0.2, 0.4, 0.5 and 0.6 m, and we know the SoP is between 100 years and 300 years.
Unprotected, we would have a probability of (1/100 - 1/300) of a depth between 0.4 and 0.5 m, and (1/300 - 1/1000) of a depth between 0.5 and 0.6 m.
Protected, we either want to set our probability of a depth between 0.4 and 0.5 m to zero or, more likely, to some
lower value reflecting the uncertainty of the SoP. In neither case do we get the desired result by setting the flood depth to zero.
In summary, we think it's best to provide data sets of
- unprotected depth
- SoP
"""

+self._depth_resource, self._sop_resource = self.inventory()

def batch_items(self) -> Iterable[BatchItem]:
return [
BatchItem(
scenario="historical",
-central_year=1971,
+central_year=1985,
flood_depth_filename="River_flood_depth_1971_2000_hist_{return_period}.tif",
extent_protected_filename="River_flood_extent_1971_2000_hist_with_protection.tif",
),
@@ -143,78 +165,125 @@ def run_single(
assert target is None or isinstance(target, OscZarr)
shape = [
39420,
-38375,
+38374,
] # y, x not all returns have same size (first one smaller at 38371)
# not all return periods have the same grid size; we pick the second return period as the reference for coordinates
i, return_period = 1, self.return_periods[1]
full_path_depth = str(full_path_depth_format).format(
return_period=self.return_period_str[return_period]
)
with self.fs.open(full_path_depth, "rb") as fd:
da_depth = xr.open_rasterio(fd).isel(band=0)
coords_x, coords_y = np.array(da_depth.x), np.array(da_depth.y)

# only create SoP for historical
if item.scenario == "historical":
with self.fs.open(full_path_extent, "rb") as fe:
da_sop = xr.open_rasterio(fe).isel(band=0)
# bounds = da.rio.bounds()
z_sop = target.create_empty(
self._sop_resource.path.format(
scenario=item.scenario, year=item.central_year
),
shape[1],
shape[0],
da_sop.rio.transform(),
str(da_sop.crs),
index_name="standard of protection (years)",
indexes=["min", "max"],
)
values_max_sop = np.array(da_sop.data, dtype="float32")
sop_no_data = da_sop.attrs["nodatavals"]
values_max_sop[values_max_sop == sop_no_data] = float("nan")
values_min_sop = self._get_mins(values_max_sop)
z_sop[0, 0 : len(da_sop.y), 0 : len(da_sop.x)] = values_min_sop[:, :]
z_sop[1, 0 : len(da_sop.y), 0 : len(da_sop.x)] = values_max_sop[:, :]
del values_min_sop, values_max_sop

for i, return_period in enumerate(self.return_periods):
full_path_depth = str(full_path_depth_format).format(
return_period=self.return_period_str[return_period]
)
with self.fs.open(full_path_depth, "rb") as fd:
-dad = xr.open_rasterio(fd).isel(band=0)
-with self.fs.open(full_path_extent, "rb") as fe:
-dae = xr.open_rasterio(fe).isel(band=0)
-# bounds = da.rio.bounds()
-if return_period == self.return_periods[0]:
-z = target.create_empty(
-self._resource.path.format(
-scenario=item.scenario, year=item.central_year
-),
-shape[1],
-shape[0],
-dad.rio.transform(),
-str(dad.crs),
-indexes=self.return_periods,
-)
-# dad_nodata = 65535
-if (
-dad.shape[1] == 38371
-):  # corrections for various possible errors whereby coordinates are missing for certain files:
-dae = dae[:, 0:38371]
-if dae.shape[1] == 38375:
-dae = dae[:, 0:38374]
-if dad.shape[1] == 38375:
-dad = dad[:, 0:38374]
-if dae.shape[0] == 39385:
-dad = dad[35:, :]

da_depth = xr.open_rasterio(fd).isel(band=0)
if return_period == self.return_periods[0]:
z_depth = target.create_empty(
self._depth_resource.path.format(
scenario=item.scenario, year=item.central_year
),
shape[1],
shape[0],
da_depth.rio.transform(),
str(da_depth.crs),
indexes=self.return_periods,
)
if da_depth.shape[1] == 38375:
da_depth = da_depth[:, 0:38374]
-# not quite the same coordinates: check if close, within rounding error, and align exactly
-assert np.abs(np.array(dae.x) - np.array(dad.x)).max() < 1e-4
-assert np.abs(np.array(dae.y) - np.array(dad.y)).max() < 1e-4
-dae = dae.assign_coords({"x": dad.x, "y": dad.y})
-da_combined = xr.where(dae <= return_period, dad, 0)
-values = da_combined.data
-depth_no_data = dad.attrs["nodatavals"]
-values[values == depth_no_data] = float("nan")
-z[i, 0 : len(da_combined.y), 0 : len(da_combined.x)] = values[:, :]
lenx, leny = (
min(len(da_depth.x), len(coords_x)),
min(len(da_depth.y), len(coords_y)),
)
assert (
np.abs(np.array(da_depth.x[0:lenx]) - coords_x[0:lenx]).max() < 1e-4
)
assert (
np.abs(np.array(da_depth.y[0:leny]) - coords_y[0:leny]).max() < 1e-4
)
# da_depth = da_depth.assign_coords({"x": da_depth.x, "y": da_depth.y})
values_depth = da_depth.data
depth_no_data = da_depth.attrs["nodatavals"]
values_depth[values_depth == depth_no_data] = float("nan")
z_depth[i, 0 : len(da_depth.y), 0 : len(da_depth.x)] = values_depth[
:, :
]
del values_depth
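The nodata masking used just above (replace the raster's sentinel value with NaN before writing to Zarr) can be sketched in isolation; the sentinel value here is hypothetical, whereas the real files declare theirs in the `nodatavals` attribute:

```python
import numpy as np

# Replace a raster's nodata sentinel with NaN so that masked pixels are
# ignored by downstream statistics and map tiling.
nodata = 65535.0  # hypothetical sentinel value for illustration
values = np.array([[0.5, nodata], [1.2, 0.0]], dtype="float32")
values[values == nodata] = float("nan")  # in-place masking, as in run_single
```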

def _get_mins(self, maxes: np.ndarray):
mins = np.empty_like(maxes)
for min, max in [
(2, 10),
(10, 30),
(30, 100),
(100, 300),
(300, 1000),
(1000, 10000),
]:
mins = np.where(maxes == max, min, mins)
return mins
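The band lookup implemented by `_get_mins` can be exercised standalone. This sketch uses a free function rather than the method, and (unlike the `np.empty_like` above) initialises with NaN so that values matching no band maximum are visibly missing:

```python
import numpy as np

# FLOPROS protection bands: a maximum SoP maps to the lower bound of its band.
FLOPROS_BANDS = [(2, 10), (10, 30), (30, 100), (100, 300), (300, 1000), (1000, 10000)]

def get_mins(maxes: np.ndarray) -> np.ndarray:
    # Start from NaN so that entries outside the known bands stay missing.
    mins = np.full_like(maxes, np.nan)
    for lo, hi in FLOPROS_BANDS:
        mins = np.where(maxes == hi, lo, mins)
    return mins

maxes = np.array([300.0, 1000.0, 7.0], dtype="float32")
mins = get_mins(maxes)  # -> [100.0, 300.0, nan]
```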

def create_maps(self, source: OscZarr, target: OscZarr):
"""
Create map images.
"""
...
-create_tiles_for_resource(source, target, self._resource, max_zoom=10)
+create_tiles_for_resource(source, target, self._depth_resource, max_zoom=10)

def inventory(self) -> Iterable[HazardResource]:
"""Get the (unexpanded) HazardModel(s) that comprise the inventory."""
with open(
os.path.join(os.path.dirname(__file__), "tudelft_flood.md"), "r"
) as f:
description = f.read()
with open(
os.path.join(os.path.dirname(__file__), "tudelft_flood_sop.md"), "r"
) as f:
description_sop = f.read()
return [
HazardResource(
hazard_type="RiverineInundation",
indicator_id="flood_depth",
indicator_model_id="tudelft",
indicator_model_gcm="CLMcom-CCLM4-8-17-EC-EARTH",
path="inundation/river_tudelft/v2/flood_depth_{scenario}_{year}",
path="inundation/river_tudelft/v2/flood_depth_unprot_{scenario}_{year}",
params={},
display_name="Flood depth (TUDelft)",
description=description,
group_id="",
display_groups=[],
-map=MapInfo( # type: ignore[call-arg] # has a default value for bbox
-bounds=[],
+map=MapInfo(
+bbox=[],
+bounds=[],
colormap=Colormap(
max_index=255,
min_index=1,
@@ -225,16 +294,50 @@ def inventory(self) -> Iterable[HazardResource]:
units="metres",
),
index_values=None,
path="maps/inundation/river_tudelft/v2/flood_depth_{scenario}_{year}_map",
path="maps/inundation/river_tudelft/v2/flood_depth_unprot_{scenario}_{year}_map",
source="map_array_pyramid",
),
units="metres",
scenarios=[
Scenario(id="historical", years=[1971]),
Scenario(id="historical", years=[1985]),
Scenario(id="rcp4p5", years=[2035, 2085]),
Scenario(id="rcp8p5", years=[2035, 2085]),
],
-)
+),
HazardResource(
hazard_type="RiverineInundation",
indicator_id="flood_sop",
indicator_model_id="tudelft",
indicator_model_gcm="CLMcom-CCLM4-8-17-EC-EARTH",
path="inundation/river_tudelft/v2/flood_sop_{scenario}_{year}",
params={},
display_name="Standard of protection (TUDelft)",
description=description_sop,
group_id="",
display_groups=[],
map=MapInfo(
bbox=[],
bounds=[],
colormap=Colormap(
max_index=255,
min_index=1,
nodata_index=0,
name="flare",
min_value=0.0,
max_value=1500.0,
units="years",
),
index_values=None,
path="maps/inundation/river_tudelft/v2/flood_sop_{scenario}_{year}_map",
source="map_array_pyramid",
),
units="years",
scenarios=[
Scenario(id="historical", years=[1985]),
Scenario(id="rcp4p5", years=[2035, 2085]),
Scenario(id="rcp8p5", years=[2035, 2085]),
],
),
]
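The `path` fields above are templates expanded per scenario and year; a minimal illustration using the SoP resource path from the inventory:

```python
# Expand a resource path template for one scenario/year combination.
path_template = "inundation/river_tudelft/v2/flood_sop_{scenario}_{year}"
path = path_template.format(scenario="historical", year=1985)
# -> "inundation/river_tudelft/v2/flood_sop_historical_1985"
```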


17 changes: 17 additions & 0 deletions src/hazard/onboard/tudelft_flood_sop.md
@@ -0,0 +1,17 @@
Maximum and minimum standards of protection for riverine floods occurring in Europe in the present and future climate.
This is derived from a data set of protected flood extent, i.e. the minimum return period for which flood depth is non-zero.

The data set is [here](https://data.4tu.nl/datasets/df7b63b0-1114-4515-a562-117ca165dc5b), part of the
[RAIN data set](https://data.4tu.nl/collections/1e84bf47-5838-40cb-b381-64d3497b3b36)
(Risk Analysis of Infrastructure Networks in Response to Extreme Weather). The RAIN report is
[here](http://rain-project.eu/wp-content/uploads/2016/09/D2.5_REPORT_final.pdf).

The 'River_flood_extent_X_Y_Z_with_protection' data set is obtained by applying flood protection standards from the FLOPROS database
on top of a data set of unprotected flood depth. That is, it provides information about flood protection for regions susceptible
to flooding; the data set is therefore not simply a restatement of the FLOPROS database.

A minimum and maximum are provided to capture uncertainty in flood protection, which vulnerability models may choose to take into
account. The protection bands are those of FLOPROS: 2–10 years, 10–30 years, 30–100 years, 100–300 years, 300–1000 years and 1000–10000 years.

As an example, if a region has a protected extent of 300 years, this is taken to imply that the area is protected against floods within the
100–300 year band. It is then a modelling choice as to whether protection is taken from the 100- or 300-year flood depth.
1 change: 1 addition & 0 deletions src/hazard/onboard/wri_aqueduct_flood.md
@@ -1,3 +1,4 @@

The World Resources Institute (WRI) [Aqueduct Floods model](https://www.wri.org/aqueduct) is an acute riverine and coastal flood hazard model with a spatial resolution of 30 × 30 arc seconds (approx. 1 km at the equator). Flood intensity is provided as a _return period_ map: each point comprises a curve of inundation depths for 9 different return periods (also known as recurrence periods). If a flood event has depth $d_i$ with return period $r_i$, this implies that the probability of a flood event with depth greater than $d_i$ occurring in any one year is $1 / r_i$; this is the _exceedance probability_. Aqueduct Floods is based on Global Flood Risk with IMAGE Scenarios (GLOFRIS); see [here](https://www.wri.org/aqueduct/publications) for more details.
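The return-period relationship can be made concrete with a short sketch. The depth values below are invented for illustration; the nine return periods are assumed to be those used by Aqueduct Floods:

```python
# Exceedance probability from a return-period curve: a depth d_i with return
# period r_i is exceeded in any one year with probability 1 / r_i.
return_periods = [2, 5, 10, 25, 50, 100, 250, 500, 1000]  # assumed Aqueduct set
depths = [0.1, 0.3, 0.5, 0.8, 1.0, 1.2, 1.5, 1.7, 2.0]    # metres, illustrative

exceedance_probs = [1.0 / r for r in return_periods]
# e.g. the 100-year depth (1.2 m here) is exceeded with probability 0.01 per year
```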

For more details and relevant citations see the
18 changes: 15 additions & 3 deletions src/hazard/sources/osc_zarr.py
@@ -68,6 +68,7 @@ def create_empty(
transform: Affine,
crs: str,
overwrite=False,
index_name: Optional[str] = None,
indexes=[0],
chunks=None,
):
@@ -77,6 +78,7 @@ def _zarr_create(
transform,
str(crs),
overwrite,
index_name=index_name,
indexes=indexes,
chunks=chunks,
)
@@ -246,6 +248,7 @@ def _zarr_create(
transform: Affine,
crs: str,
overwrite=False,
index_name: Optional[str] = None,
indexes=None,
chunks=None,
):
@@ -280,11 +283,18 @@ def _zarr_create(
"int64",
]:
indexes = [int(i) for i in indexes]
-self._add_attributes(z.attrs, transform, crs, indexes)
+self._add_attributes(
+z.attrs, transform, crs, index_name=index_name, indexes=indexes
+)
return z

def _add_attributes(
-self, attrs: Dict[str, Any], transform: Affine, crs: str, indexes=None
+self,
+attrs: Dict[str, Any],
+transform: Affine,
+crs: str,
+index_name: Optional[str] = None,
+indexes=None,
):
trans_members = [
transform.a,
@@ -303,7 +313,9 @@ def _add_attributes(
)
if indexes is not None:
attrs["index_values"] = list(indexes)
attrs["index_name"] = "return period (years)"
attrs["index_name"] = (
index_name if index_name is not None else "return period (years)"
)
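The defaulting behaviour introduced here can be exercised with a standalone sketch. `add_index_attrs` is a simplified stand-in for the `_add_attributes` method, not the OscZarr implementation itself:

```python
from typing import Any, Dict, Optional

def add_index_attrs(attrs: Dict[str, Any], index_name: Optional[str], indexes) -> None:
    # Mirrors the change above: a caller-supplied index_name (e.g. the SoP
    # resource's "standard of protection (years)") overrides the historical
    # default of "return period (years)".
    if indexes is not None:
        attrs["index_values"] = list(indexes)
        attrs["index_name"] = (
            index_name if index_name is not None else "return period (years)"
        )

attrs: Dict[str, Any] = {}
add_index_attrs(attrs, None, [10, 100, 1000])
# attrs["index_name"] -> "return period (years)"

attrs_sop: Dict[str, Any] = {}
add_index_attrs(attrs_sop, "standard of protection (years)", ["min", "max"])
# attrs_sop["index_name"] -> "standard of protection (years)"
```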

@staticmethod
def normalize_dims(da: xr.DataArray) -> xr.DataArray: