Beginner workflow

This tutorial demonstrates a beginner workflow: processing data, visualising the object store, and retrieving and plotting data.

Check installation

This tutorial assumes that you have installed openghg. To check that the installation was successful you can open an IPython console and import openghg.

In a terminal type

ipython

Then import openghg and print its version string. If you see something like the output below, openghg is installed correctly.

In [1]: import openghg
In [2]: openghg.__version__
Out[2]: '0.0.1'

If you get an ImportError, please go back to the install section of the documentation.
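If you want to check for openghg from a script rather than an interactive session, a small helper using the standard library's importlib.metadata can report whether any package is installed (a general sketch, not part of the openghg API):

```python
from importlib import metadata

def package_version(name):
    """Return the installed version of a package, or None if it isn't installed."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

print(package_version("openghg"))  # e.g. '0.0.1', or None if not installed
```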

Notebooks

If you haven’t used Jupyter notebooks before please see this introduction.

1. Setting up our environment

First the notebook sets up the environment needed to create the object store at our desired location. By default this location is /tmp/openghg_store. For the purposes of this tutorial this path is fine, but as it is a temporary directory it may not survive a reboot of the computer.

If you want to create an object store that survives a reboot you can change the path to anything you like. We recommend a path such as ~/openghg_store, which will create the object store in a directory called openghg_store in your home directory.
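For example, a persistent store location can be built by expanding the ~ with pathlib before setting the OPENGHG_PATH environment variable used later in this tutorial (a sketch of the idea):

```python
import os
from pathlib import Path

# Expand ~ to the user's home directory and create the folder if needed
store_path = Path("~/openghg_store").expanduser()
store_path.mkdir(parents=True, exist_ok=True)

os.environ["OPENGHG_PATH"] = str(store_path)
```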

[1]:
from openghg.modules import ObsSurface
from openghg.objectstore import visualise_store
from openghg.localclient import get_obs_surface, RankSources

import os
import tempfile

Set an environment variable for the OpenGHG object store

Here we create a temporary directory, but you can use any folder you like by setting a path in place of tmp_dir.name. The object store created by this notebook will only last as long as the notebook; if you want to create a longer-lived object store, set a path below.

[2]:
tmp_dir = tempfile.TemporaryDirectory()
os.environ["OPENGHG_PATH"] = tmp_dir.name # "/tmp/openghg_store"

2. Processing data

First we want to process some files from our local directory.

[3]:
decc_file = "../data/DECC/bsd.picarro.1minute.248m.dat"

We can pass this filepath to ObsSurface.read_file. We must also tell it the type of data we want it to process; DECC data is of the CRDS type. We also pass in the site code and the name of the network.

[4]:
decc_results = ObsSurface.read_file(filepath=decc_file, data_type="CRDS", site="bsd", network="DECC")
Processing: bsd.picarro.1minute.248m.dat: 100%|██████████| 1/1 [00:00<00:00,  8.59it/s]

Here decc_results is a dictionary containing the UUIDs (universally unique identifiers) of the Datasources the data has been assigned to. This tells us that the data has been processed and stored correctly.
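If you have a whole directory of DECC files to process, you can build the list of paths with glob first (a sketch; it assumes read_file accepts a list of filepaths, as in the AGAGE example later in this tutorial):

```python
import glob

def find_data_files(directory, pattern="*.dat"):
    """Return a sorted list of data files matching pattern in directory."""
    return sorted(glob.glob(f"{directory}/{pattern}"))

decc_files = find_data_files("../data/DECC")
# decc_results = ObsSurface.read_file(filepath=decc_files, data_type="CRDS", site="bsd", network="DECC")
```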

A note on Datasources

Datasources are objects stored in the object store that hold the data and metadata associated with each measurement we upload to the platform.

For example, if we upload a file that contains readings for three gas species from a single site at a specific inlet height, OpenGHG will assign this data to three different Datasources, one for each species. Metadata such as the site, inlet height, species and network are stored alongside the measurements for easy searching.

Datasources can also handle multiple versions of data from a single site, so if scales or other factors change multiple versions may be stored for easy future comparison.

Note

When you run this notebook different UUIDs will be created for the Datasources. This is expected as each time a Datasource is created from scratch it is assigned a unique UUID.

[5]:
decc_results
[5]:
defaultdict(dict,
            {'processed': {'bsd.picarro.1minute.248m.dat': {'ch4': 'db96c114-d48a-48c0-8ce0-d926045d7c6c',
               'co2': '0c4261eb-8042-47ea-a986-6540601e860a',
               'co': 'f34c9efb-00d6-4f90-8651-90c8cb0c6204'}}})

We can now process the AGAGE data. The functions that process the AGAGE data expect each data file to have an accompanying precisions file. For each data file we create a tuple containing the data filename and the precisions filename. A simpler method of uploading these file types is planned.

We must create a tuple for each pair

list_of_tuples = [(data_filepath, precision_filepath), (d1, p1), (d2, p2), ...]
[6]:
agage_tuples = [('../data/AGAGE/capegrim-medusa.18.C', '../data/AGAGE/capegrim-medusa.18.precisions.C')]
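Writing these tuples out by hand gets tedious when there are many files; a small helper can pair them automatically (a sketch, not an openghg function, assuming the naming convention shown above where x.C is accompanied by x.precisions.C):

```python
from pathlib import Path

def pair_agage_files(directory):
    """Pair each GCWERKS data file with its precisions file.

    Assumes files named like capegrim-medusa.18.C with a matching
    capegrim-medusa.18.precisions.C in the same directory.
    """
    pairs = []
    for data_file in sorted(Path(directory).glob("*.C")):
        # Skip the precisions files themselves
        if data_file.name.endswith(".precisions.C"):
            continue
        precision_file = data_file.with_name(data_file.stem + ".precisions.C")
        if precision_file.exists():
            pairs.append((str(data_file), str(precision_file)))
    return pairs

# agage_tuples = pair_agage_files("../data/AGAGE")
```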

Then we process the files as we did before with the DECC data, but this time changing the data type to GCWERKS and the network to AGAGE.

[7]:
agage_results = ObsSurface.read_file(filepath=agage_tuples, data_type="GCWERKS", site="CGO", network="AGAGE")
Processing: capegrim-medusa.18.C: 100%|██████████| 1/1 [00:01<00:00,  1.80s/it]

When viewing agage_results you will see a large number of Datasource UUIDs due to the large number of gases in each data file.

[8]:
agage_results
[8]:
defaultdict(dict,
            {'processed': {'capegrim-medusa.18.C': {'nf3_70m': 'ff068c65-e4a3-47e0-bf17-b6abd50ddafc',
               'cf4_70m': '21e68b0d-d568-4611-aa1d-0d1f1d06bb71',
               'pfc116_70m': 'ba29b94d-473b-4bd3-825b-ba821f4f793d',
               'pfc218_70m': '03460ea4-d086-4fb7-9a9b-2abf5b4d3e4f',
               'pfc318_70m': '1bbc3165-cb7b-44cb-91d2-df7fc76fca66',
               'c4f10_70m': '49a154ef-c3b1-427a-84c5-d69c3fba28e0',
               'c6f14_70m': '957afaf3-e4b2-45b5-b83b-660b9c8a8dfc',
               'sf6_70m': '8ca5d4ec-2dc6-4cde-a9db-e0606c5cf4c5',
               'so2f2_70m': 'f122e0a1-5163-44c8-820c-cfca6100cf5c',
               'sf5cf3_70m': '889027f1-77c7-405f-b236-a1480057ed54',
               'hfc23_70m': '8c2d6c6c-1b2e-4864-8abf-192c718e4078',
               'hfc32_70m': '93a2b734-e039-45b6-a334-66fc3e2c7548',
               'hfc125_70m': 'ff882a47-ece0-4c93-b80f-b9d29bfae167',
               'hfc134a_70m': '157ccce6-fcce-4088-9957-49af6dbb9018',
               'hfc143a_70m': 'a44c9a89-314b-4d63-8d8f-3d9b21a3801d',
               'hfc152a_70m': 'bab60df3-8348-4095-94aa-1295864cf3ac',
               'hfc227ea_70m': '78397e3f-7fb7-4fb2-b77b-913c89b3de2e',
               'hfc236fa_70m': '0d718a9f-d941-40d4-9c45-9229781120c3',
               'hfc245fa_70m': 'eaa126e6-6a58-4a9c-9a31-81482f9a2255',
               'hfc365mfc_70m': 'ad8f3de9-c612-4d9d-8bbe-01a2362ef7a8',
               'hfc4310mee_70m': 'e7034837-fcdd-4e32-94c7-092f09523c47',
               'hcfc22_70m': '050a53a8-3923-4480-b84c-390cb59f32e0',
               'hcfc124_70m': '962680a8-5c29-4c11-a089-f9b2de6e3e25',
               'hcfc132b_70m': 'cbd75ead-6e4b-4098-b2aa-62631485ba9f',
               'hcfc133a_70m': '9cad3a35-a381-4fd1-9088-b26ce0f2ae62',
               'hcfc141b_70m': '644339cd-d280-4a18-b88c-05f7ab71c334',
               'hcfc142b_70m': 'df41d39f-0431-4ad4-b3f7-bc42b25ddf74',
               'cfc11_70m': '337aeb61-92a4-46ea-8779-11d14dd05fc0',
               'cfc12_70m': '8ef44dc2-6c48-47ea-a558-c8a9badfa4e2',
               'cfc13_70m': 'e13b8457-b9a8-4084-a8c7-b4d7d942436a',
               'cfc112_70m': '19d58c96-7628-4688-ad4a-62dd50c6afb3',
               'cfc113_70m': 'b31e0bd7-491c-437f-853a-874a3d3cdbe0',
               'cfc114_70m': 'd323cd83-382b-4071-ad72-6653b9165f1c',
               'cfc115_70m': '18cc5811-3f74-409e-ba28-c355ec1612d8',
               'h1211_70m': '0686b669-19ca-4533-96c0-354113d220f8',
               'h1301_70m': 'db12d15e-6828-4772-91f9-af313b94c45c',
               'h2402_70m': '02e697d6-7b89-409a-99ab-8b93774fbebd',
               'ch3cl_70m': '5f5c7064-7d61-4125-925f-05bf1bfa8f5a',
               'ch3br_70m': '6d524768-46e1-476d-b7d3-2250fe96b1ff',
               'ch3i_70m': '3c3a87ce-e690-4b67-8524-6ad971a00753',
               'ch2cl2_70m': '18747d7a-b0ed-4554-a86b-20206ddd1208',
               'chcl3_70m': '8a334095-7b64-4a31-b30d-6794c9ecaf28',
               'ccl4_70m': '9643d816-9d0d-43e6-9ab2-51b1da642f26',
               'ch2br2_70m': '43674cf8-6a7f-4b21-801f-82f90107dbfa',
               'chbr3_70m': '0be23db2-757c-42de-88b6-8936644864d3',
               'ch3ccl3_70m': '4eef0117-598a-4a71-ab2b-09c24afdbdc3',
               'tce_70m': 'b989238d-2daf-484b-8d62-0d6ac59cfa7f',
               'pce_70m': 'a2ca161f-c0bc-4156-b799-d356fd75bb19',
               'ethyne_70m': '7520f018-f54d-4cde-9905-881f690494ac',
               'ethane_70m': 'fb50261c-5fc9-4999-a3f2-230973592f92',
               'propane_70m': '7e268153-c9b4-41d9-b6c9-3bb4058e35c3',
               'cpropane_70m': 'd1f3c1e3-4879-42e1-9549-c64f713d60a1',
               'benzene_70m': 'cb0dc205-dd3a-4365-bee4-fa381ca6478e',
               'toluene_70m': '5cef20c9-1a77-402d-888b-d05d7565f440',
               'cos_70m': '4bc820c8-bbe3-4acd-8566-0936d75a29c7',
               'desflurane_70m': 'aba7a234-4fa5-42bc-8061-ce27a8765ddd'}}})
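If you only want a species-to-UUID mapping across all processed files, the nested results dictionary can be flattened with a few lines of plain Python (a small sketch, not an openghg function):

```python
def processed_uuids(results):
    """Flatten {'processed': {filename: {species_key: uuid}}} into {species_key: uuid}."""
    flat = {}
    for species_map in results.get("processed", {}).values():
        flat.update(species_map)
    return flat

# processed_uuids(agage_results) would return {'nf3_70m': 'ff06...', 'cf4_70m': '21e6...', ...}
```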

3. Visualising the object store

Now that we have a simple object store created we can view the objects within it as a force graph. To do this we use the visualise_store function from the objectstore submodule. Note that the cell may take a few moments to load as the force graph is created.

In the force graph the central blue node is the ObsSurface node; associated with it are all the data it has processed. The next nodes in the topology are networks, shown in green; in the graph you will see DECC and AGAGE nodes. From these you'll see site nodes in red and then individual Datasources in orange.

Note

The object store visualisation created by this function is commented out here and won’t be visible in the documentation but can be uncommented and run when you use the notebook version.

[9]:
# visualise_store()

Now that we know this data is in the object store, we can search it and retrieve data from it.

4. Retrieving data

To retrieve data from the object store we can use the get_obs_surface function from the localclient submodule. This allows us to retrieve and view the data stored.

[10]:
data = get_obs_surface(site="bsd", species="co", network="DECC")

If we view data we expect an ObsData object to have been returned.

[11]:
data
[11]:
ObsData(data=<xarray.Dataset>
Dimensions:                    (time: 105)
Coordinates:
  * time                       (time) datetime64[ns] 2014-01-30T10:52:30 ... ...
Data variables:
    mf                         (time) float64 204.6 200.8 201.5 ... 196.9 196.3
    mf_variability             (time) float64 6.232 5.934 5.176 ... 6.031 6.879
    mf_number_of_observations  (time) float64 26.0 26.0 25.0 ... 26.0 26.0 25.0
Attributes: (12/22)
    data_owner:           Simon O'Doherty
    data_owner_email:     s.odoherty@bristol.ac.uk
    inlet_height_magl:    248m
    comment:              Cavity ring-down measurements. Output from GCWerks
    Conditions of use:    Ensure that you contact the data owner at the outse...
    Source:               In situ measurements of air
    ...                   ...
    sampling_period:      60
    inlet:                248m
    port:                 8
    type:                 air
    network:              decc
    scale:                WMO-X2014A, metadata={'data_owner': "Simon O'Doherty", 'data_owner_email': 's.odoherty@bristol.ac.uk', 'inlet_height_magl': '248m', 'comment': 'Cavity ring-down measurements. Output from GCWerks', 'Conditions of use': 'Ensure that you contact the data owner at the outset of your project.', 'Source': 'In situ measurements of air', 'Conventions': 'CF-1.6', 'File created': '2021-04-30 11:12:25.743021+00:00', 'Processed by': 'OpenGHG_Cloud', 'species': 'co', 'station_longitude': -1.15033, 'station_latitude': 54.35858, 'station_long_name': 'Bilsdale, UK', 'station_height_masl': 380.0, 'site': 'bsd', 'instrument': 'picarro', 'sampling_period': 60, 'inlet': '248m', 'port': '8', 'type': 'air', 'network': 'decc', 'scale': 'WMO-X2014A'})

First we tell matplotlib that we are plotting inside a Jupyter notebook; this ensures a plot with interactive controls is created.

[12]:
%matplotlib notebook
[13]:
example_data = data.data
mol_frac = example_data.mf
mol_frac.plot()
[13]:
[<matplotlib.lines.Line2D at 0x7f8e1ee37b90>]

5. Ranking data

The dateranges of data from different inlets at a site, such as Heathfield (hfd), can overlap. If we want to easily retrieve the highest quality data over a range of dates we don't want to repeatedly check which inlet/instrument was correct for a given daterange. This problem is solved using ranking.

A given inlet on a specific instrument at a site can be given a rank for a daterange. To do this we use the RankSources class from the localclient submodule.

The rest of this tutorial requires updating; the cells below will not work.

[14]:
#r = RankSources()

#r.get_sources(site="hfd", species="co")

The returned dictionary gives us two keys, one for each inlet height. To rank a source we use the set_rank method, which expects two arguments: rank_key, the key given to each source in the dict above, and rank_data, a dictionary of the form

rank_data = {"co2_hfd_50m_picarro": {"1": [daterange_1], "2": [daterange_2]}}

We can create this dictionary using the create_daterange helper method of RankSources, as shown below.

[15]:
#daterange_100m = r.create_daterange(start="2013-11-01", end="2016-01-01")

This creates a daterange string that will be understood by openghg. We can then place this in a list to create our rank_data dictionary.
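Since create_daterange cannot be run in this outdated section, here is a sketch of how such a daterange string could be built by hand; the exact format is an assumption based on the rank output shown further down:

```python
from datetime import datetime

def daterange_str(start, end):
    """Build a daterange string like '2013-11-01T00:00:00_2016-01-01T00:00:00'.

    The 'start_end' format is an assumption based on the rank output
    shown later in this tutorial.
    """
    fmt = "%Y-%m-%dT%H:%M:%S"
    return f"{datetime.fromisoformat(start).strftime(fmt)}_{datetime.fromisoformat(end).strftime(fmt)}"

daterange_100m = daterange_str("2013-11-01", "2016-01-01")
```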

[16]:
#rank_data = {"co_hfd_100m_picarro": {"1": [daterange_100m]}}
[17]:
#rank_data

Now we can set the rank using set_rank

[18]:
#r.set_rank(rank_key="co_hfd_100m_picarro", rank_data=rank_data)

We can now query the sources again to confirm the rank has been set correctly

[19]:
#r.get_sources(site="hfd", species="co")

We can now see

'co_hfd_100m_picarro': {'rank': defaultdict(list, {'1': ['2013-11-01T00:00:00_2016-01-01T00:00:00']})

This tells us the rank was set correctly over the daterange we specified. When we now search for data, we'll automatically get the highest ranked data.
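To sanity-check a rank entry yourself, you can test whether a given date falls inside one of the stored daterange strings (a sketch assuming the 'start_end' string format shown above):

```python
from datetime import datetime

def in_daterange(date, daterange):
    """Return True if date (ISO format) falls within a 'start_end' daterange string."""
    start_s, end_s = daterange.split("_")
    return datetime.fromisoformat(start_s) <= datetime.fromisoformat(date) <= datetime.fromisoformat(end_s)

in_daterange("2014-06-01", "2013-11-01T00:00:00_2016-01-01T00:00:00")  # True
```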

Let’s search for CO data at Heathfield between 2014 and 2015, dates covered by both inlets.

[20]:
#updated_data = get_obs_surface(site="hfd", species="co", network="AGAGE")
[21]:
#updated_data

Now we get the highest ranked data returned to us without the need to specify an inlet height or instrument.

If we know that we want data from the 50m inlet we can still specify this in the search and retrieve that data.

[22]:
#fiftym_data = get_obs_surface(site="hfd", species="co", network="AGAGE", inlet="50m")
[23]:
#fiftym_data

We can also view the ranks we have given to data with a similar layout to the object store visualisation we created earlier.

To do this we use the visualise_rankings method of the RankSources class. In this figure we’ll only see Datasources that contain ranked data. Hover over the nodes for further information.

The rankings visualisation created by this function is commented out here and won’t be visible in the documentation but can be uncommented and run when you use the notebook version.

[24]:
# r.visualise_rankings()

If you used tmp_dir as the location for your object store at the start of the tutorial, you can run the cell below to remove any files that were created.

[25]:
tmp_dir.cleanup()

Further tutorials will be added soon. If you want to explore the internal workings of OpenGHG please check out the Developer API documentation. If you would like to contribute to the project we welcome pull requests to both the code and the documentation; for help and guidance on contributing see our contributing page.