%matplotlib inline
import pandas as pd
import socket
host = socket.getfqdn()
from core import load, zoom, calc, save, plots, monitor
#reload funcs after updating ./core/*.py
import importlib
importlib.reload(load)
importlib.reload(zoom)
importlib.reload(calc)
importlib.reload(save)
importlib.reload(plots)
importlib.reload(monitor)
<module 'core.monitor' from '/ccc/work/cont003/gen7420/talandel/TOOLS/monitor-sedna/notebook/core/monitor.py'>
If you submit the job with a job scheduler, the environment variables listed below can be passed (an example of setting them follows the list of control files further down).
local : if True, start a local dask cluster; otherwise the value is taken as the number of workers. If 'local' is not given, it defaults to 'True'.
%env ychunk='2', #%env tchunk='2'
These control the chunking. 'False' keeps the original netcdf file's chunking unmodified.
ychunk=10 groups the original netcdf files ten by ten along y.
tchunk=1 chunks the time coordinate one step at a time.
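As a rough illustration only (the real logic lives in core/load.py; the numbers below come from this run's output), the effect of the two switches on the loaded dataset is roughly:

# hedged illustration of the chunking switches, not the actual load code
data = data.chunk({'t': 1})    # tchunk=1: one time step per chunk
data = data.chunk({'y': 130})  # ychunk=10: ten MPI subdomains of 13 rows each per chunk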
%env file_exp=
'file_exp': the name of the 'experiment'; this corresponds to the intake catalog name without the path and the .yaml extension.
#%env year=
For validation, this corresponds to the 'year' part of path/year/month. For monitoring, it corresponds to a 'date' pattern: setting it to 0[0-9], 1[0-9] or [2-3][0-9] means "process all matching files in the monitoring directory", so the job can be split into three lots. For the DELTA experiment, 'year' really is the year.
%env month=
For monitoring, this corresponds to the file path path-XIOS.{month}/.
For the DELTA experiment, 'month' really is the month.
Proceed with saving? True or False; the default is True.
Proceed with plotting? True or False; the default is True.
Proceed with the computation, or just load a previously computed result? True or False; the default is True.
Save the output file used for plotting.
Using a kerchunked file -> False; not using kerchunk -> True.
Name of the control file to be used for computation/plots/saving. There are a number of M_xxx.csv files:
Monitor.sh calls M_MLD_2D; AWTD.sh, Fluxnet.sh, Siconc.sh, IceClim.sh, FWC_SSH.sh, Integrals.sh and Sections.sh call:
M_AWTMD
M_Fluxnet
M_Ice_quantities
M_IceClim M_IceConce M_IceThick
M_FWC_2D M_FWC_integrals M_FWC_SSH M_SSH_anomaly
M_Mean_temp_velo M_Mooring
M_Sectionx M_Sectiony
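As an example, a run like the one in this notebook could be configured before execution with %env. The values below are based on this run's output and the descriptions above; the variable names 'control', 'save' and 'plot' are assumptions, since only 'calc' and 'lazy' appear explicitly in the cells that follow.

# hedged example of configuring the run; variable names partly assumed
%env local=True
%env ychunk=10
%env tchunk=1
%env file_exp=SEDNA_DELTA_MONITOR
%env year=2015
%env month=01
%env control=M_FWC_2D
%env calc=True
%env save=True
%env plot=True
%env lazy=False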
%%time
# 'savefig': save the output as html or not. Keep it True.
savefig=True
client,cluster,control,catalog_url,month,year,daskreport,outputpath = load.set_control(host)
!mkdir -p $outputpath
!mkdir -p $daskreport
client
local True
using host= irene5028.c-irene.mg1.tgcc.ccc.cea.fr
starting dask cluster on local= True workers 16 10000000000 rome
local cluster starting
This code is running on irene5028.c-irene.mg1.tgcc.ccc.cea.fr using SEDNA_DELTA_MONITOR file experiment, read from ../lib/SEDNA_DELTA_MONITOR.yaml on year= 2015 on month= 01
outputpath= ../results/SEDNA_DELTA_MONITOR/
daskreport= ../results/dask/7449016irene5028.c-irene.mg1.tgcc.ccc.cea.fr_SEDNA_DELTA_MONITOR_01M_FWC_2D/
CPU times: user 484 ms, sys: 116 ms, total: 600 ms
Wall time: 17.5 s
Client-95bacd1d-6f42-11ed-a02f-080038b93e55
Connection method: Cluster object | Cluster type: distributed.LocalCluster
Dashboard: http://127.0.0.1:8787/status

LocalCluster 3e1e72b1
Dashboard: http://127.0.0.1:8787/status | Workers: 16
Total threads: 128 | Total memory: 221.88 GiB
Status: running | Using processes: True

Scheduler-2a1ac358-36d9-4d90-a064-b1b7a0830425
Comm: tcp://127.0.0.1:36670 | Workers: 16
Dashboard: http://127.0.0.1:8787/status | Total threads: 128
Started: Just now | Total memory: 221.88 GiB
16 local workers, each with 8 threads, 13.87 GiB memory, its own Comm/Nanny ports on 127.0.0.1, and a local directory under /tmp/dask-worker-space/.
df=load.controlfile(control)
#Take out 'later' tagged computations
#df=df[~df['Value'].str.contains('later')]
df
Value | Inputs | Equation | Zone | Plot | Colourmap | MinMax | Unit | Oldname | Unnamed: 10
---|---|---|---|---|---|---|---|---|---
FWC_2D | gridS.vosaline,param.mask,param.e3t,param.e1te2t | calc.FWC2D_UFUNC(data) | BBFG | maps | Spectral_r | (0,24) | m | S-1 |
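For orientation, here is a minimal xarray sketch of what such a 2D freshwater-content equation typically computes, assuming a reference salinity of 34.8; the actual calc.FWC2D_UFUNC implementation may differ.

# hedged sketch of a 2D freshwater content, using the inputs listed in the table:
# gridS.vosaline (salinity), param.mask (3D wet mask) and param.e3t (layer thickness)
def fwc2d_sketch(ds, s_ref=34.8):
    # vertical integral of (s_ref - S)/s_ref weighted by e3t, in metres
    return ((s_ref - ds.vosaline) / s_ref * ds.e3t).where(ds.mask).sum('z').rename('FWC2D')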
Each computation consists of loading the input data, zooming to the target region, computing the diagnostic, saving the result and (optionally) plotting it.
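A rough outline of what the driver does for each control-file row, inferred from the step markers printed in the output below (#2 Zooming Data, #3 Start computing, #4 Saving); this is a hedged sketch, not the actual core/monitor.py code, and the argument names are approximate.

# hypothetical outline of one pass of monitor.auto, for orientation only
def auto_sketch(df, data, savefig, daskreport, outputpath, file_exp='SEDNA'):
    for _, row in df.iterrows():
        filename = f"{file_exp}_{row.Plot}_{row.Zone}_{row.Value}"        # e.g. SEDNA_maps_BBFG_FWC_2D
        zoomed = getattr(zoom, row.Zone)(monitor.optimize_dataset(data))  # e.g. zoom.BBFG(data)
        result = eval(row.Equation, {'calc': calc, 'data': zoomed})       # e.g. calc.FWC2D_UFUNC(data)
        save.datas(result, plot=row.Plot, path=outputpath, filename=filename)
        # a plotting step follows when savefig is True (omitted here)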
%%time
import os
# read the calc/lazy switches from the environment (strings 'True'/'False')
calcswitch = os.environ.get('calc', 'True')
lazy = os.environ.get('lazy', 'False')
# load data only if at least one row of the control file declares inputs
loaddata = (df.Inputs != '').any()
print('calcswitch=', calcswitch, 'df.Inputs != nothing', loaddata, 'lazy=', lazy)
data = load.datas(catalog_url, df.Inputs, month, year, daskreport, lazy=lazy) if (calcswitch == 'True' and loaddata) else 0
data
calcswitch= True df.Inputs != nothing True lazy= False
../lib/SEDNA_DELTA_MONITOR.yaml
using param_xios reading ../lib/SEDNA_DELTA_MONITOR.yaml
using param_xios reading <bound method DataSourceBase.describe of sources:
  param_xios:
    args:
      combine: nested
      concat_dim: y
      urlpath: /ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/SEDNA-I/SEDNA_Domain_cfg_Tgt_20210423_tsh10m_L1/param_f32/x_*.nc
      xarray_kwargs:
        compat: override
        coords: minimal
        data_vars: minimal
        parallel: true
    description: SEDNA NEMO parameters from MPI output nav_lon lat fails
    driver: intake_xarray.netcdf.NetCDFSource
    metadata:
      catalog_dir: /ccc/work/cont003/gen7420/talandel/TOOLS/monitor-sedna/notebook/../lib/
>
{'name': 'param_xios', 'container': 'xarray', 'plugin': ['netcdf'], 'driver': ['netcdf'], 'description': 'SEDNA NEMO parameters from MPI output nav_lon lat fails', 'direct_access': 'forbid', 'user_parameters': [{'name': 'path', 'description': 'file coordinate', 'type': 'str', 'default': '/ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/MESH/SEDNA_mesh_mask_Tgt_20210423_tsh10m_L1/param'}], 'metadata': {}, 'args': {'urlpath': '/ccc/work/cont003/gen7420/odakatin/CONFIGS/SEDNA/SEDNA-I/SEDNA_Domain_cfg_Tgt_20210423_tsh10m_L1/param_f32/x_*.nc', 'combine': 'nested', 'concat_dim': 'y'}}
0 read gridS ['vosaline'] lazy= False
using load_data_xios_kerchunk reading gridS
using load_data_xios_kerchunk reading <bound method DataSourceBase.describe of sources:
  data_xios_kerchunk:
    args:
      consolidated: false
      storage_options:
        fo: file:////ccc/cont003/home/ra5563/ra5563/catalogue/DELTA/201501/gridS_0[0-5][0-9][0-9].json
        target_protocol: file
      urlpath: reference://
    description: CREG025 NEMO outputs from different xios server in kerchunk format
    driver: intake_xarray.xzarr.ZarrSource
    metadata:
      catalog_dir: /ccc/work/cont003/gen7420/talandel/TOOLS/monitor-sedna/notebook/../lib/
>
took 66.23906087875366 seconds
0 merging gridS ['vosaline']
param nav_lat will be included in data
param mask2d will be included in data
param nav_lon will be included in data
param mask will be included in data
param e3t will be included in data
param e1te2t will be included in data
ychunk= 10 calldatas_y_rechunk
sum_num (13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 12, 12, 12, ..., 12, 12)
start rechunking with (130, 122, 120, 120, 120, ..., 120, 48)
end of y_rechunk
CPU times: user 22.4 s, sys: 3.98 s, total: 26.4 s
Wall time: 1min 26s
<xarray.Dataset>
Dimensions:   (t: 31, z: 150, y: 6540, x: 6560)
Coordinates:
  * t         (t) object 2015-01-01 12:00:00 ... 2015-01-31 12:00:00
  * y         (y) int64 1 2 3 4 5 6 7 8 ... 6534 6535 6536 6537 6538 6539 6540
  * x         (x) int64 1 2 3 4 5 6 7 8 ... 6554 6555 6556 6557 6558 6559 6560
  * z         (z) int64 1 2 3 4 5 6 7 8 9 ... 143 144 145 146 147 148 149 150
    nav_lat   (y, x) float32 dask.array<chunksize=(130, 6560), meta=np.ndarray>
    mask2d    (y, x) bool dask.array<chunksize=(130, 6560), meta=np.ndarray>
    nav_lon   (y, x) float32 dask.array<chunksize=(130, 6560), meta=np.ndarray>
    mask      (z, y, x) bool dask.array<chunksize=(150, 130, 6560), meta=np.ndarray>
    e3t       (z, y, x) float64 dask.array<chunksize=(150, 130, 6560), meta=np.ndarray>
    e1te2t    (y, x) float64 dask.array<chunksize=(130, 6560), meta=np.ndarray>
Data variables:
    vosaline  (t, z, y, x) float32 dask.array<chunksize=(1, 150, 130, 6560), meta=np.ndarray>
Attributes: (12/26)
    CASE:                    DELTA
    CONFIG:                  SEDNA
    Conventions:             CF-1.6
    DOMAIN_dimensions_ids:   [2, 3]
    DOMAIN_halo_size_end:    [0, 0]
    DOMAIN_halo_size_start:  [0, 0]
    ...                      ...
    nj:                      13
    output_frequency:        1d
    start_date:              20090101
    timeStamp:               2022-Jul-21 16:35:22 GMT
    title:                   ocean T grid variables
    uuid:                    9aef3543-35d6-4da0-a58a-f2c75b69d3a7
%%time
monitor.auto(df, data, savefig, daskreport, outputpath, file_exp='SEDNA')
#calc= True #save= True #plot= False
Value='FWC_2D' Zone='BBFG' Plot='maps' cmap='Spectral_r' clabel='m' clim= (0, 24)
outputpath='../results/SEDNA_DELTA_MONITOR/' nc_outputpath='../nc_results/SEDNA_DELTA_MONITOR/'
filename='SEDNA_maps_BBFG_FWC_2D'
data=monitor.optimize_dataset(data)
#2 Zooming Data
data= zoom.BBFG(data)
data=monitor.optimize_dataset(data)
<xarray.Dataset>
Dimensions:   (t: 31, z: 150, y: 5264, x: 6560)
Coordinates:
  * t         (t) object 2015-01-01 12:00:00 ... 2015-01-31 12:00:00
  * y         (y) int64 1277 1278 1279 1280 1281 ... 6536 6537 6538 6539 6540
  * x         (x) int64 1 2 3 4 5 6 7 8 ... 6554 6555 6556 6557 6558 6559 6560
  * z         (z) int64 1 2 3 4 5 6 7 8 9 ... 143 144 145 146 147 148 149 150
    nav_lat   (y, x) float32 dask.array<chunksize=(56, 6560), meta=np.ndarray>
    mask2d    (y, x) bool dask.array<chunksize=(56, 6560), meta=np.ndarray>
    nav_lon   (y, x) float32 dask.array<chunksize=(56, 6560), meta=np.ndarray>
    mask      (z, y, x) bool dask.array<chunksize=(150, 56, 6560), meta=np.ndarray>
    e3t       (z, y, x) float64 dask.array<chunksize=(150, 56, 6560), meta=np.ndarray>
    e1te2t    (y, x) float64 dask.array<chunksize=(56, 6560), meta=np.ndarray>
Data variables:
    vosaline  (t, z, y, x) float32 dask.array<chunksize=(1, 150, 56, 6560), meta=np.ndarray>
Attributes: (12/26)
    CASE:                    DELTA
    CONFIG:                  SEDNA
    Conventions:             CF-1.6
    DOMAIN_dimensions_ids:   [2, 3]
    DOMAIN_halo_size_end:    [0, 0]
    DOMAIN_halo_size_start:  [0, 0]
    ...                      ...
    nj:                      13
    output_frequency:        1d
    start_date:              20090101
    timeStamp:               2022-Jul-21 16:35:22 GMT
    title:                   ocean T grid variables
    uuid:                    9aef3543-35d6-4da0-a58a-f2c75b69d3a7
#3 Start computing
data= calc.FWC2D_UFUNC(data)
monitor.optimize_dataset(data)
add optimise here once otimise can recognise
<xarray.Dataset>
Dimensions:  (t: 31, y: 5264, x: 6560)
Coordinates:
  * t        (t) object 2015-01-01 12:00:00 ... 2015-01-31 12:00:00
  * y        (y) int64 1277 1278 1279 1280 1281 ... 6536 6537 6538 6539 6540
  * x        (x) int64 1 2 3 4 5 6 7 8 ... 6554 6555 6556 6557 6558 6559 6560
    nav_lat  (y, x) float32 dask.array<chunksize=(56, 6560), meta=np.ndarray>
    mask2d   (y, x) bool dask.array<chunksize=(56, 6560), meta=np.ndarray>
    nav_lon  (y, x) float32 dask.array<chunksize=(56, 6560), meta=np.ndarray>
    e1te2t   (y, x) float64 dask.array<chunksize=(56, 6560), meta=np.ndarray>
Data variables:
    FWC2D    (t, y, x) float32 dask.array<chunksize=(1, 56, 6560), meta=np.ndarray>
#4 Saving SEDNA_maps_BBFG_FWC_2D
data=save.datas(data,plot=Plot,path=nc_outputpath,filename=filename)
start saving data
saving data in a file
t (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
slice(0, 1, None) slice(1, 2, None)
2022-11-28 18:38:18,623 - distributed.worker_memory - WARNING - Worker is at 81% memory usage. Pausing worker. Process memory: 11.36 GiB -- Worker memory limit: 13.87 GiB
2022-11-28 18:38:24,593 - distributed.worker_memory - WARNING - Worker tcp://127.0.0.1:37485 (pid=41115) exceeded 95% memory budget. Restarting...
2022-11-28 18:38:25,435 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:35069 -> tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:35069 remote=tcp://127.0.0.1:53384>: Stream is closed
2022-11-28 18:38:25,435 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42528 remote=tcp://127.0.0.1:37485>: Stream is closed
2022-11-28 18:38:25,435 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:37613 -> tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:37613 remote=tcp://127.0.0.1:54158>: Stream is closed
2022-11-28 18:38:25,435 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42504 remote=tcp://127.0.0.1:37485>: ConnectionResetError: [Errno 104] Connection reset by peer
2022-11-28 18:38:25,438 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42518 remote=tcp://127.0.0.1:37485>: ConnectionResetError: [Errno 104] Connection reset by peer
2022-11-28 18:38:25,446 - distributed.nanny - WARNING - Restarting worker
2022-11-28 18:38:26,995 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:42914 -> tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:42914 remote=tcp://127.0.0.1:59372>: Stream is closed
2022-11-28 18:38:27,818 - distributed.worker - ERROR - Worker stream died during communication: tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://127.0.0.1:42840 remote=tcp://127.0.0.1:37485>: Stream is closed
2022-11-28 18:38:27,821 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:43455 -> tcp://127.0.0.1:37485
    distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:43455 remote=tcp://127.0.0.1:52632>: Stream is closed
(full tornado/distributed stack traces elided)
slice(2, 3, None) slice(3, 4, None) slice(4, 5, None) slice(5, 6, None) slice(6, 7, None)
2022-11-28 18:46:52,358 - distributed.worker_memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 11.14 GiB -- Worker memory limit: 13.87 GiB
2022-11-28 18:47:07,091 - distributed.worker_memory - WARNING - Worker is at 59% memory usage. Resuming worker. Process memory: 8.28 GiB -- Worker memory limit: 13.87 GiB
slice(7, 8, None) slice(8, 9, None) slice(9, 10, None) slice(10, 11, None) slice(11, 12, None) slice(12, 13, None) slice(13, 14, None) slice(14, 15, None) slice(15, 16, None) slice(16, 17, None) slice(17, 18, None) slice(18, 19, None) slice(19, 20, None) slice(20, 21, None) slice(21, 22, None) slice(22, 23, None) slice(23, 24, None) slice(24, 25, None) slice(25, 26, None) slice(26, 27, None) slice(27, 28, None) slice(28, 29, None) slice(29, 30, None) slice(30, 31, None)
CPU times: user 7min 57s, sys: 1min 32s, total: 9min 30s
Wall time: 46min 5s
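The worker-memory warnings above show the run brushing against the 13.87 GiB per-worker limit during the saving step. One possible mitigation, sketched here as an assumption rather than a tested fix, is to use smaller y chunks:

# hedged mitigation sketch, not part of the monitoring code
%env ychunk=5                 # group fewer subdomains per chunk on the next run
# or rechunk the already-loaded dataset before calling monitor.auto:
data = data.chunk({'y': 65})  # roughly halve the ~130-row y blocks used here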