In [1]:
%matplotlib inline
import pandas as pd
import socket
host = socket.getfqdn()

from core import load, zoom, calc, save, plots, monitor
In [2]:
#reload funcs after updating ./core/*.py
import importlib
importlib.reload(load)
importlib.reload(zoom)
importlib.reload(calc)
importlib.reload(save)
importlib.reload(plots)
importlib.reload(monitor)
Out[2]:
<module 'core.monitor' from '/ccc/work/cont003/gen7420/odakatin/monitor-sedna/notebook/core/monitor.py'>

If you submit the job with a job scheduler

Below is the list of environment variables one can pass.

%env local='2'

local: if 'True', run a dask local cluster; if not 'True', the value sets the number of workers. If 'local' is not given, it defaults to 'True'.
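
A minimal sketch of how such a flag could be interpreted (illustrative only: the real logic lives in core/load.py, and the variable handling here is an assumption):

import os
from dask.distributed import Client, LocalCluster

local = os.environ.get('local', 'True')              # defaults to 'True' when unset
if local.strip("'\"") == 'True':
    cluster = LocalCluster()                         # let dask pick the number of workers
else:
    cluster = LocalCluster(n_workers=int(local.strip("'\"")))
client = Client(cluster)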

%env ychunk='2'

%env tchunk='2'

These control chunking. 'False' keeps the chunking of the original netCDF files unchanged.

ychunk=10 groups the original netCDF files ten by ten along y.

tchunk=1 chunks the time coordinate one step at a time.
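
A hedged illustration of what these switches might translate to (the actual mapping is done inside core/load.py; the chunk arithmetic below is an assumption):

import os

ychunk = os.environ.get('ychunk', 'False')
tchunk = os.environ.get('tchunk', 'False')

chunks = {}
if ychunk != 'False':
    chunks['y'] = int(ychunk)    # e.g. merge that many per-file y-slabs into one dask chunk
if tchunk != 'False':
    chunks['t'] = int(tchunk)    # e.g. one time step per chunk
# chunks would then be handed to xarray, e.g. xr.open_mfdataset(..., chunks=chunks)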

%env file_exp=

'file_exp': the name of the 'experiment'. It corresponds to the intake catalog name, without the path and the .yaml extension.

%env year=

For validation, this corresponds to the year in path/year/month.

For monitoring, this corresponds to the 'date'; a value containing * means "process all files in the monitoring directory".

Setting it to 0[0-9], 1[0-9] and *[2-3][0-9] splits the job into three lots, as illustrated below.
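
For example, against a hypothetical set of two-digit dates, those three patterns select disjoint lots (fnmatch implements the same shell-style wildcards):

import fnmatch

dates = [f"{d:02d}" for d in range(1, 31)]                 # hypothetical '01' ... '30'
for pattern in ("0[0-9]", "1[0-9]", "*[2-3][0-9]"):
    lot = [d for d in dates if fnmatch.fnmatch(d, pattern)]
    print(pattern, "->", lot[0], "...", lot[-1], f"({len(lot)} dates)")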

%env month=

For monitoring, this corresponds to the file path path-XIOS.{month}/.


%env control=FWC_SSH

Name of the control file to be used for computation/plots/saving. The job scripts use the following control names:

  - AWTD.sh: M_AWTMD
  - Ice_quant_flux.sh: M_Fluxnet, M_Ice_quantities
  - FWC_SSH.sh: M_FWC_2D, M_FWC_integrals, M_FWC_SSH, M_SSH_anomaly
  - IceClim.sh: M_IceClim, M_IceConce, M_IceThick
  - M_Mean_temp_velo, M_MLD_2D, M_Mooring, M_Sectionx, M_Sectiony

%env save= : proceed with saving? True or False. Default is True.

%env plot= : proceed with plotting? True or False. Default is True.

%env calc= : proceed with the computation, or just load the computed result? True or False. Default is True.

%env save=False

%env lazy=False

For debugging, these settings can help:

%env file_exp=SEDNA_DELTA_MONITOR %env year=2012 %env month=01

0[1-2]

%env ychunk=10 %env ychunk=False %env save=False %env plot=True %env calc=True # %env lazy=False

False

%env control=M_Fluxnet

M_Sectiony is OK with ychunk=False, local=True, lazy=False. A one-assignment-per-line version is sketched below.
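
A hedged debugging cell along those lines (IPython's %env magic takes one assignment per call, so each setting goes on its own line; the values are simply the examples above):

%env file_exp=SEDNA_DELTA_MONITOR
%env year=2012
%env month=01
%env control=M_Fluxnet
%env ychunk=10
%env save=False
%env plot=True
%env calc=True
%env lazy=False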

In [3]:
%%time
# 'savefig': do we save the output as HTML or not? Keep it True.
savefig = True
client,cluster,control,catalog_url,month,year,daskreport,outputpath = load.set_control(host)
!mkdir -p $outputpath
!mkdir -p $daskreport
client
local True
using host= irene5760.c-irene.mg1.tgcc.ccc.cea.fr starting dask cluster on local= True workers 16
10000000000
False
rome local cluster starting
This code is running on  irene5760.c-irene.mg1.tgcc.ccc.cea.fr using  SEDNA_DELTA_MONITOR file experiment, read from  ../lib/SEDNA_DELTA_MONITOR.yaml  on year= 2012  on month= 02  outputpath= ../results/SEDNA_DELTA_MONITOR/ daskreport= ../results/dask/6417345irene5760.c-irene.mg1.tgcc.ccc.cea.fr_SEDNA_DELTA_MONITOR_02M_AWTMD/
CPU times: user 551 ms, sys: 123 ms, total: 674 ms
Wall time: 21 s
Out[3]:

Client

Client-e95f7a60-13c4-11ed-980b-080038b94703

Connection method: Cluster object Cluster type: distributed.LocalCluster
Dashboard: http://127.0.0.1:8787/status

Cluster Info

LocalCluster

16cfee0f

Dashboard: http://127.0.0.1:8787/status Workers: 16
Total threads: 128 Total memory: 251.06 GiB
Status: running Using processes: True

Scheduler Info

Scheduler

Scheduler-eb402627-a971-4dab-8523-0b14f15ebda6

Comm: tcp://127.0.0.1:41593 Workers: 16
Dashboard: http://127.0.0.1:8787/status Total threads: 128
Started: Just now Total memory: 251.06 GiB

Workers

16 workers, each with 8 threads and 15.69 GiB memory (per-worker comm/dashboard/nanny ports and dask-worker-space directories omitted).

Read plotting information from a CSV file

In [4]:
df = load.controlfile(control)
# Take out computations tagged 'later'
# df = df[~df['Value'].str.contains('later')]
df
Out[4]:
| Value | Inputs | Equation | Zone | Plot | Colourmap | MinMax | Unit | Oldname | Unnamed: 10 |
|---|---|---|---|---|---|---|---|---|---|
| AW_maxtemp_depth | gridT.votemper,gridS.vosaline,param.mask,param... | calc.AWTD4(data) | ALL | AWTD_map | jet | (0,800) | m | M-5 | |
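
load.controlfile presumably reads a small CSV with those columns; a minimal sketch under assumed path and filename conventions (the real function lives in core/load.py):

import pandas as pd

def controlfile_sketch(control, path='../control/'):
    # Hypothetical location and naming; only the column layout is taken from the table above.
    df = pd.read_csv(f'{path}{control}.csv').fillna('')
    # Each row describes one diagnostic: its Inputs, the calc.* Equation to apply,
    # the Zone to zoom to, and the plotting options (Plot, Colourmap, MinMax, Unit).
    return df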

Computation starts here

Each computation consists of the following steps (a sketch of one pass follows the list):

  1. Load NEMO data set
  2. Zoom data set
  3. Compute (or load computed data set)
  4. Save
  5. Plot
  6. Close
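
A hedged sketch of one pass, pieced together from the calls monitor.auto prints later in this notebook (the zoom and plot signatures are not shown there, so those steps stay as comments):

# 1. Load: data = load.datas(catalog_url, df.Inputs, month, year, daskreport, lazy=lazy)
# 2. Zoom: presumably applied inside monitor.auto for the Zone given in the control file
data = monitor.optimize_dataset(data)                   # prepare the dask graph
data = calc.AWTD4(data)                                 # 3. Compute (Equation column of the control file)
data = save.datas(data, plot=Plot, path=nc_outputpath, filename=filename)   # 4. Save
# 5. Plot: done by the plots module when the 'plot' switch is True
# 6. Close: release the dataset so the dask workers can free memory
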
In [5]:
%%time
import os
calcswitch = os.environ.get('calc', 'True')
lazy = os.environ.get('lazy', 'False')
loaddata = (df.Inputs != '').any()
print('calcswitch=', calcswitch, 'df.Inputs != nothing', loaddata, 'lazy=', lazy)
# Only read the NEMO files when computation is requested and the control file lists inputs.
data = load.datas(catalog_url, df.Inputs, month, year, daskreport, lazy=lazy) if (calcswitch == 'True' and loaddata) else 0
data
calcswitch= True df.Inputs != nothing True lazy= False
../lib/SEDNA_DELTA_MONITOR.yaml
using param_xios reading  ../lib/SEDNA_DELTA_MONITOR.yaml
using param_xios reading  <bound method DataSourceBase.describe of sources:
  param_xios:
    args:
      combine: nested
      concat_dim: y
      urlpath: /ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/SEDNA-I/SEDNA_Domain_cfg_Tgt_20210423_tsh10m_L1/param_f32/x_*.nc
      xarray_kwargs:
        compat: override
        coords: minimal
        data_vars: minimal
        parallel: true
    description: SEDNA NEMO parameters from MPI output  nav_lon lat fails
    driver: intake_xarray.netcdf.NetCDFSource
    metadata:
      catalog_dir: /ccc/work/cont003/gen7420/odakatin/monitor-sedna/notebook/../lib/
>
{'name': 'param_xios', 'container': 'xarray', 'plugin': ['netcdf'], 'driver': ['netcdf'], 'description': 'SEDNA NEMO parameters from MPI output  nav_lon lat fails', 'direct_access': 'forbid', 'user_parameters': [{'name': 'path', 'description': 'file coordinate', 'type': 'str', 'default': '/ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/MESH/SEDNA_mesh_mask_Tgt_20210423_tsh10m_L1/param'}], 'metadata': {}, 'args': {'urlpath': '/ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/SEDNA-I/SEDNA_Domain_cfg_Tgt_20210423_tsh10m_L1/param_f32/x_*.nc', 'combine': 'nested', 'concat_dim': 'y'}}
0 read gridS ['vosaline']
lazy= False
using load_data_xios_kerchunk reading  gridS
using load_data_xios_kerchunk reading  <bound method DataSourceBase.describe of sources:
  data_xios_kerchunk:
    args:
      consolidated: false
      storage_options:
        fo: file:////ccc/cont003/home/ra5563/ra5563/catalogue/DELTA/201202/gridS_0[0-5][0-9][0-9].json
        target_protocol: file
      urlpath: reference://
    description: CREG025 NEMO outputs from different xios server in kerchunk format
    driver: intake_xarray.xzarr.ZarrSource
    metadata:
      catalog_dir: /ccc/work/cont003/gen7420/odakatin/monitor-sedna/notebook/../lib/
>
      took 43.54166221618652 seconds
0 merging gridS ['vosaline']
1 read gridT ['votemper']
lazy= False
using load_data_xios_kerchunk reading  gridT
using load_data_xios_kerchunk reading  <bound method DataSourceBase.describe of sources:
  data_xios_kerchunk:
    args:
      consolidated: false
      storage_options:
        fo: file:////ccc/cont003/home/ra5563/ra5563/catalogue/DELTA/201202/gridT_0[0-5][0-9][0-9].json
        target_protocol: file
      urlpath: reference://
    description: CREG025 NEMO outputs from different xios server in kerchunk format
    driver: intake_xarray.xzarr.ZarrSource
    metadata:
      catalog_dir: /ccc/work/cont003/gen7420/odakatin/monitor-sedna/notebook/../lib/
>
      took 39.78523826599121 seconds
1 merging gridT ['votemper']
      took 1.4800565242767334 seconds
param depth will be included in data
param nav_lat will be included in data
param nav_lon will be included in data
param mask will be included in data
param mask2d will be included in data
CPU times: user 37.7 s, sys: 5.74 s, total: 43.4 s
Wall time: 1min 48s
Out[5]:
<xarray.Dataset>
Dimensions:        (t: 28, z: 150, y: 6540, x: 6560)
Coordinates:
    time_centered  (t) object dask.array<chunksize=(1,), meta=np.ndarray>
  * t              (t) object 2012-02-01 12:00:00 ... 2012-02-28 12:00:00
  * y              (y) int64 1 2 3 4 5 6 7 ... 6535 6536 6537 6538 6539 6540
  * x              (x) int64 1 2 3 4 5 6 7 ... 6555 6556 6557 6558 6559 6560
  * z              (z) int64 1 2 3 4 5 6 7 8 ... 143 144 145 146 147 148 149 150
    depth          (z, y, x) float32 dask.array<chunksize=(150, 13, 6560), meta=np.ndarray>
    nav_lat        (y, x) float32 dask.array<chunksize=(13, 6560), meta=np.ndarray>
    nav_lon        (y, x) float32 dask.array<chunksize=(13, 6560), meta=np.ndarray>
    mask           (z, y, x) bool dask.array<chunksize=(150, 13, 6560), meta=np.ndarray>
    mask2d         (y, x) bool dask.array<chunksize=(13, 6560), meta=np.ndarray>
Data variables:
    vosaline       (t, z, y, x) float32 dask.array<chunksize=(1, 150, 13, 6560), meta=np.ndarray>
    votemper       (t, z, y, x) float32 dask.array<chunksize=(1, 150, 13, 6560), meta=np.ndarray>
Attributes: (12/26)
    CASE:                    DELTA
    CONFIG:                  SEDNA
    Conventions:             CF-1.6
    DOMAIN_dimensions_ids:   [2, 3]
    DOMAIN_halo_size_end:    [0, 0]
    DOMAIN_halo_size_start:  [0, 0]
    ...                      ...
    nj:                      13
    output_frequency:        1d
    start_date:              20090101
    timeStamp:               2022-Jan-18 16:51:26 GMT
    title:                   ocean T grid variables
    uuid:                    6ca3a74a-269a-44e2-91db-2aea875dbf84
In [6]:
%%time
monitor.auto(df, data, savefig, daskreport, outputpath, file_exp='SEDNA')
#calc= True
#save= True
#plot= False
Value='AW_maxtemp_depth'
Zone='ALL'
Plot='AWTD_map'
cmap='jet'
clabel='m'
clim= (0, 800)
outputpath='../results/SEDNA_DELTA_MONITOR/'
nc_outputpath='../nc_results/SEDNA_DELTA_MONITOR/'
filename='SEDNA_AWTD_map_ALL_AW_maxtemp_depth'
data=monitor.optimize_dataset(data)
#3 Start computing 
data= calc.AWTD4(data)
monitor.optimize_dataset(data)
add optimise here once otimise can recognise
<xarray.Dataset>
Dimensions:        (t: 28, y: 6540, x: 6560)
Coordinates:
    time_centered  (t) object dask.array<chunksize=(1,), meta=np.ndarray>
  * t              (t) object 2012-02-01 12:00:00 ... 2012-02-28 12:00:00
  * y              (y) int64 1 2 3 4 5 6 7 ... 6535 6536 6537 6538 6539 6540
  * x              (x) int64 1 2 3 4 5 6 7 ... 6555 6556 6557 6558 6559 6560
    nav_lat        (y, x) float32 dask.array<chunksize=(13, 6560), meta=np.ndarray>
    nav_lon        (y, x) float32 dask.array<chunksize=(13, 6560), meta=np.ndarray>
    mask2d         (y, x) bool dask.array<chunksize=(13, 6560), meta=np.ndarray>
Data variables:
    AWT            (t, y, x) float32 dask.array<chunksize=(1, 13, 6560), meta=np.ndarray>
    AWD            (t, y, x) float32 dask.array<chunksize=(1, 13, 6560), meta=np.ndarray>
#4 Saving  SEDNA_AWTD_map_ALL_AW_maxtemp_depth
data=save.datas(data,plot=Plot,path=nc_outputpath,filename=filename)
start saving data
saving data in a file
t (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
slice(0, 1, None)
/ccc/cont003/home/ra5563/ra5563/monitor/lib/python3.10/site-packages/dask/array/reductions.py:608: RuntimeWarning: All-NaN slice encountered
  return np.nanmax(x_chunk, axis=axis, keepdims=keepdims)
slice(1, 2, None)
slice(2, 3, None)
slice(3, 4, None)
slice(4, 5, None)
slice(5, 6, None)
slice(6, 7, None)
slice(7, 8, None)
slice(8, 9, None)
slice(9, 10, None)
slice(10, 11, None)
slice(11, 12, None)
slice(12, 13, None)
slice(13, 14, None)
slice(14, 15, None)
2022-08-04 09:24:05,036 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:24:06,875 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(15, 16, None)
2022-08-04 09:24:34,756 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:24:37,464 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:24:42,208 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
slice(16, 17, None)
2022-08-04 09:25:06,966 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:25:09,554 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:25:21,638 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(17, 18, None)
2022-08-04 09:25:41,222 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:25:44,605 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:26:06,196 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(18, 19, None)
2022-08-04 09:26:15,020 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:26:17,449 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(19, 20, None)
2022-08-04 09:26:41,071 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:26:47,995 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:26:50,516 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(20, 21, None)
2022-08-04 09:27:14,031 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:27:21,320 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:27:23,281 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(21, 22, None)
2022-08-04 09:27:50,639 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:27:53,571 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:27:54,946 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
slice(22, 23, None)
2022-08-04 09:28:22,493 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:28:25,276 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:28:26,874 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
slice(23, 24, None)
2022-08-04 09:28:53,955 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:28:56,618 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:29:07,475 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
slice(24, 25, None)
2022-08-04 09:29:26,933 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:29:29,652 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:29:59,349 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(25, 26, None)
2022-08-04 09:30:08,041 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:30:10,586 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
slice(26, 27, None)
2022-08-04 09:30:33,973 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:30:41,030 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:30:43,503 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
slice(27, 28, None)
2022-08-04 09:31:10,471 - distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
2022-08-04 09:31:13,379 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
2022-08-04 09:31:14,812 - distributed.utils_perf - WARNING - full garbage collections took 11% CPU time recently (threshold: 10%)
CPU times: user 9min 43s, sys: 1min 20s, total: 11min 3s
Wall time: 16min 34s