Advanced Dashboarding
This exercise is entirely freeform. Get into groups of 3-4 people (if available!) and start building a dashboard with everything you have learned in this tutorial. By the end of the exercise you should have a dashboard that:

- Uses datashading to render the whole dataset
- Builds a pipeline using pn.rx (a minimal sketch of this pattern follows the imports below)
- Filters the data using either a linked selection or a widget (e.g. a RangeSlider)
- Uses a widget to control some aspect of the styling of the plot (e.g. to select a colormap, color, or size)
- Is servable by running panel serve Advanced_Dashboarding.ipynb in the exercise directory
import pathlib
import colorcet as cc # noqa
import holoviews as hv # noqa
import numpy as np # noqa
import pandas as pd
import panel as pn
import xarray as xr
import hvplot.pandas # noqa: API import
import hvplot.xarray # noqa: API import
pn.extension()
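To make the pn.rx requirement concrete, here is a minimal, self-contained sketch of the pipeline pattern on a toy DataFrame (the toy data, its column names, and the FloatSlider are illustrative placeholders, not part of the exercise data): wrap the DataFrame with pn.rx, filter it with a widget reference, and display the filtered result with hvPlot.

# Toy reactive pipeline: placeholder data, not the earthquake dataset
toy = pd.DataFrame({
    'x': np.random.randn(1000),
    'y': np.random.randn(1000),
    'value': np.random.rand(1000),
})
threshold = pn.widgets.FloatSlider(name='Minimum value', start=0, end=1, value=0.5)

toy_rx = pn.rx(toy)                              # reactive proxy of the DataFrame
filtered = toy_rx[toy_rx['value'] >= threshold]  # re-evaluates whenever the slider moves

pn.Column(threshold, filtered.hvplot.scatter('x', 'y'))

Exactly the same pattern applies to the earthquake DataFrame loaded next; a fuller sketch that adds datashading and a styling widget appears at the end of this notebook.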
As a starting point we will load the data; everything else is up to you:
%%time
df = pd.read_parquet(pathlib.Path('../../data/earthquakes-projected.parq'))
CPU times: user 944 ms, sys: 96.3 ms, total: 1.04 s
Wall time: 557 ms
ds = xr.open_dataarray(pathlib.Path('../../data/raster/gpw_v4_population_density_rev11_2010_2pt5_min.nc'))
cleaned_ds = ds.where(ds.values != ds.nodatavals).sel(band=1)
cleaned_ds.name = 'population'
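As a quick sanity check of the cleaned raster, one possibility is to render it with hvPlot's Datashader-backed rasterize option (the colormap and log scaling below are illustrative choices, not requirements of the exercise):

# Datashader-backed view of the full-resolution population grid;
# hvPlot infers the spatial coordinates from the DataArray.
cleaned_ds.hvplot.image(
    rasterize=True,  # aggregate with Datashader before rendering
    cmap='kbc',      # colorcet colormap, purely illustrative
    logz=True,       # population density spans several orders of magnitude
)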
Not sure what to build? Here are some ideas:
- Build a dashboard with a pipeline that filters the data on one or more of the columns (e.g. magnitude using a RangeSlider or time using a DateRangeSlider) and then datashades it
- Build a dashboard with multiple views of the data (e.g. longitude vs. latitude, magnitude vs. depth, etc.) that cross-filters the data (see the Glaciers notebook for reference)
- Build a dashboard that allows you to select multiple earthquakes and compute (and display) statistics on them.
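Putting the pieces together, here is one hedged sketch of the first idea, assuming the earthquake DataFrame has the 'mag', 'easting' and 'northing' columns used elsewhere in this tutorial; whether widgets can be passed directly as plot arguments depends on your Panel and hvPlot versions, so treat this as a starting point rather than a reference solution.

# Widgets: one to filter (magnitude range), one to style (colormap)
mag_slider = pn.widgets.RangeSlider(name='Magnitude', start=0, end=9, value=(4, 9))
cmap_select = pn.widgets.Select(name='Colormap', options=['fire', 'bgy', 'kbc'])

# Reactive pipeline: filter the DataFrame by the slider's range
dfrx = pn.rx(df)
filtered = dfrx[
    (dfrx['mag'] >= mag_slider.param.value_start) &
    (dfrx['mag'] <= mag_slider.param.value_end)
]

# rasterize=True hands rendering off to Datashader, so the whole
# filtered dataset is aggregated rather than plotted point by point
points = filtered.hvplot.points(
    'easting', 'northing', rasterize=True, cmap=cmap_select,
    cnorm='eq_hist', xaxis=None, yaxis=None, responsive=True, height=500,
)

pn.Column(
    '## Earthquake explorer (sketch)',
    pn.Row(mag_slider, cmap_select),
    points,
).servable()

With .servable() marked on the layout, running panel serve Advanced_Dashboarding.ipynb from the exercise directory should serve it as a standalone app.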