%matplotlib inline
import pandas as pd
import socket
host = socket.getfqdn()
from core import load, zoom, calc, save, plots, monitor
#reload funcs after updating ./core/*.py
import importlib
importlib.reload(load)
importlib.reload(zoom)
importlib.reload(calc)
importlib.reload(save)
importlib.reload(plots)
importlib.reload(monitor)
<module 'core.monitor' from '/ccc/work/cont003/gen7420/odakatin/monitor-sedna/notebook/core/monitor.py'>
If you submit the job with a job scheduler, below is a list of environment variables one can pass.
'local': if True, run a local dask cluster; otherwise start the number of workers set in 'local'. If 'local' is not given, it defaults to 'True'.
%env ychunk='2' #%env tchunk='2'
Controls chunking. 'False' keeps the original netCDF file's chunking unmodified.
ychunk=10 groups the original netCDF file's y dimension into blocks of 10.
tchunk=1 chunks the time coordinate one step at a time.
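A sketch of how ychunk/tchunk could be turned into an xarray-style chunks dict; the helper, the 'y'/'t' coordinate names, and the defaults are illustrative assumptions, not the actual logic in ./core/load.py:

```python
import os

def chunk_spec(env=os.environ):
    """Build an xarray-style chunks dict from the ychunk/tchunk switches.
    'False' (the default) keeps the original netCDF chunking for that axis."""
    chunks = {}
    ychunk = env.get('ychunk', 'False')
    tchunk = env.get('tchunk', 'False')
    if ychunk != 'False':
        chunks['y'] = int(ychunk)  # e.g. ychunk=10 -> blocks of 10 rows
    if tchunk != 'False':
        chunks['t'] = int(tchunk)  # e.g. tchunk=1 -> one time step per chunk
    return chunks

# The result could then feed something like xr.open_dataset(..., chunks=chunk_spec())
```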
%env file_exp=
'file_exp': which 'experiment' is this? It corresponds to the intake catalog name, without the path and the .yaml extension.
#%env year=
For Validation, this corresponds to the year in path/year/month. For monitoring, 'year' corresponds to 'date': setting it to 0[0-9], 1[0-9], or [2-3][0-9] processes all matching files in the monitoring directory, so the job can be split into three lots. For the DELTA experiment, 'year' really is the year.
%env month=
For monitoring, this corresponds to the file path path-XIOS.{month}/.
For the DELTA experiment, 'month' really is the month.
Proceed with saving? True or False; the default is True.
Proceed with plotting? True or False; the default is True.
Proceed with computation, or just load the computed result? True or False; the default is True.
Save the output file used for plotting.
Using a kerchunked file -> False; not using kerchunk -> True.
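These switches are plain environment strings; a minimal sketch of how they might be read (the helper name is hypothetical, but the string comparison mirrors the `os.environ.get` pattern used later in the notebook):

```python
import os

def get_switch(name, default='True', env=os.environ):
    """Read one of the notebook's True/False switches from the environment.
    Values are compared as strings, so only the exact string 'True' enables it."""
    return env.get(name, default) == 'True'

# Hypothetical usage mirroring the switches described above:
do_save = get_switch('save')           # proceed saving?  defaults to True
do_plot = get_switch('plot')           # proceed plotting? defaults to True
do_calc = get_switch('calc')           # recompute, or just load results?
lazy    = get_switch('lazy', 'False')  # False -> use the kerchunked file
```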
Name of the control file used for computation/plots/save. There are a number of M_xxx.csv files:
Monitor.sh calls M_MLD_2D;
AWTD.sh, Fluxnet.sh, Siconc.sh, IceClim.sh, FWC_SSH.sh, Integrals.sh, and Sections.sh call the following:
M_AWTMD
M_Fluxnet
M_Ice_quantities
M_IceClim M_IceConce M_IceThick
M_FWC_2D M_FWC_integrals M_FWC_SSH M_SSH_anomaly
M_Mean_temp_velo M_Mooring
M_Sectionx M_Sectiony
%%time
# 'savefig': Do we save the output as HTML or not? Keep it True.
savefig=True
client, cluster, control, catalog_url, month, year, daskreport, outputpath = load.set_control(host)
!mkdir -p $outputpath
!mkdir -p $daskreport
client
local True using host= irene4200.c-irene.mg1.tgcc.ccc.cea.fr
starting dask cluster on local= True workers 16 10000000000 rome
local cluster starting
This code is running on irene4200.c-irene.mg1.tgcc.ccc.cea.fr
using SEDNA_DELTA_MONITOR file experiment, read from ../lib/SEDNA_DELTA_MONITOR.yaml
on year= 2012 on month= 04
outputpath= ../results/SEDNA_DELTA_MONITOR/
daskreport= ../results/dask/6462438irene4200.c-irene.mg1.tgcc.ccc.cea.fr_SEDNA_DELTA_MONITOR_04M_IceClim/
CPU times: user 495 ms, sys: 131 ms, total: 626 ms
Wall time: 21.5 s
Client-28248a5f-1809-11ed-af68-080038b93683
Connection method: Cluster object | Cluster type: distributed.LocalCluster
Dashboard: http://127.0.0.1:8787/status

Cluster c7f00ad9: Workers: 16 | Total threads: 128 | Total memory: 251.06 GiB | Status: running | Using processes: True

Scheduler-dd05e0ed-2c51-4bca-846e-25aa50645295: Comm: tcp://127.0.0.1:40207 | Workers: 16 | Dashboard: http://127.0.0.1:8787/status | Total threads: 128 | Started: Just now | Total memory: 251.06 GiB

Worker (example): Comm: tcp://127.0.0.1:34915 | Total threads: 8 | Dashboard: http://127.0.0.1:34177/status | Memory: 15.69 GiB | Nanny: tcp://127.0.0.1:38563 | Local directory: /tmp/dask-worker-space/worker-qktz_569
(15 further workers with the same configuration: 8 threads and 15.69 GiB each)
df=load.controlfile(control)
#Take out 'later' tagged computations
#df=df[~df['Value'].str.contains('later')]
df
Value | Inputs | Equation | Zone | Plot | Colourmap | MinMax | Unit | Oldname
---|---|---|---|---|---|---|---|---
IceClim | | calc.IceClim_load(data,nc_outputpath) | ALL | IceClim | Spectral | (0,5) | m | M-4
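`load.controlfile` presumably reads one of the M_xxx.csv control files into a DataFrame shaped like the table above; a minimal pandas sketch (the CSV content is an assumption reconstructed from the displayed row, and the read logic is illustrative, not the actual implementation):

```python
import io
import pandas as pd

# A one-row control file shaped like the DataFrame above (content assumed).
csv_text = """Value,Inputs,Equation,Zone,Plot,Colourmap,MinMax,Unit,Oldname
IceClim,,"calc.IceClim_load(data,nc_outputpath)",ALL,IceClim,Spectral,"(0,5)",m,M-4
"""

df = pd.read_csv(io.StringIO(csv_text)).fillna('')
# The commented-out filter in the notebook would drop rows tagged 'later':
df = df[~df['Value'].str.contains('later')]
```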
Each computation consists of loading the inputs, computing, saving, and plotting.
%%time
import os
calcswitch = os.environ.get('calc', 'True')
lazy = os.environ.get('lazy', 'False')
loaddata = (df.Inputs != '').any()
print('calcswitch=', calcswitch, 'df.Inputs != nothing', loaddata, 'lazy=', lazy)
data = load.datas(catalog_url, df.Inputs, month, year, daskreport, lazy=lazy) if (calcswitch == 'True' and loaddata) else 0
data
calcswitch= True df.Inputs != nothing False lazy= True
CPU times: user 430 µs, sys: 0 ns, total: 430 µs
Wall time: 407 µs
0
%%time
monitor.auto(df, data, savefig, daskreport, outputpath, file_exp='SEDNA')
#calc= True #save= False #plot= True
Value='IceClim' Zone='ALL' Plot='IceClim' cmap='Spectral' clabel='m' clim= (0, 5)
outputpath='../results/SEDNA_DELTA_MONITOR/' nc_outputpath='../nc_results/SEDNA_DELTA_MONITOR/'
filename='SEDNA_IceClim_ALL_IceClim'
#3 Start computing
data= calc.IceClim_load(data,nc_outputpath)
monitor.optimize_dataset(data)
start saving data
filename= ../nc_results/SEDNA_DELTA_MONITOR/SEDNA_maps_ALL_IceConce/t_*/y_*/x_*.nc
dim ('x', 'y', 't')
load computed data completed
start saving data
filename= ../nc_results/SEDNA_DELTA_MONITOR/SEDNA_maps_ALL_IceThickness/t_*/y_*/x_*.nc
dim ('x', 'y', 't')
load computed data completed
add optimise here once otimise can recognise
<xarray.Dataset>
Dimensions:        (t: 61, y: 6540, x: 6560)
Coordinates:
  * t              (t) object 2012-03-01 12:00:00 ... 2012-04-30 12:00:00
  * y              (y) int64 1 2 3 4 5 6 7 ... 6535 6536 6537 6538 6539 6540
  * x              (x) int64 1 2 3 4 5 6 7 ... 6555 6556 6557 6558 6559 6560
    nav_lat        (y, x) float32 dask.array<chunksize=(130, 6560), meta=np.ndarray>
    nav_lon        (y, x) float32 dask.array<chunksize=(130, 6560), meta=np.ndarray>
    time_centered  (t) object dask.array<chunksize=(31,), meta=np.ndarray>
    mask2d         (y, x) bool dask.array<chunksize=(130, 6560), meta=np.ndarray>
Data variables:
    siconc         (t, y, x) float32 dask.array<chunksize=(31, 130, 6560), meta=np.ndarray>
    sivolu         (t, y, x) float32 dask.array<chunksize=(31, 130, 6560), meta=np.ndarray>
#5 Plotting
plots.IceClim(data,path=outputpath,filename=filename,save=savefig,cmap=cmap,clim=clim,clabel=clabel)
filename= ../results/SEDNA_DELTA_MONITOR/SEDNA_IceClim_ALL_IceClim_20120301-20120430.html
starts plotting
/ccc/cont003/home/ra5563/ra5563/monitor/lib/python3.10/site-packages/geoviews/operation/projection.py:99: ShapelyDeprecationWarning: __len__ for multi-part geometries is deprecated and will be removed in Shapely 2.0. Check the length of the `geoms` property instead to get the number of parts of a multi-part geometry.
  if proj_geom.geom_type == 'GeometryCollection' and len(proj_geom) == 0:
../results/SEDNA_DELTA_MONITOR/SEDNA_IceClim_ALL_IceClim_20120301-20120430.html created
CPU times: user 35min 19s, sys: 9min 12s, total: 44min 31s
Wall time: 43min 43s
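`monitor.auto` presumably iterates over the control-file rows and runs the compute -> save -> plot steps seen in the log above; a minimal sketch of that dispatch pattern (the function, the `rows`/`handlers` structure, and all names here are illustrative assumptions, not the actual code in ./core/monitor.py):

```python
def auto_sketch(rows, data, handlers):
    """Dispatch each control row through the compute -> save -> plot steps
    seen in the log. 'rows' stands in for the control DataFrame rows and
    'handlers' for the calc/save/plots modules; all names are illustrative."""
    results = []
    for row in rows:
        result = handlers['calc'](row, data)   # e.g. calc.IceClim_load(data, ...)
        handlers['save'](row, result)          # write netCDF under nc_outputpath
        handlers['plot'](row, result)          # render the html under outputpath
        results.append(result)
    return results
```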