Hi Gary
Thank you for the comments:
We download a zip file, unpack it, and process the files in the loop mentioned above. The job is triggered by the zip being released at a remote site, but there is only one triggering file, so it is a single run with no conflicts there.
Memory usage is stable at ~2.5 GB out of 8 GB during all tasks. It is a dual-CPU virtual Windows host, and CPU usage fluctuates between ~3% and ~50%. I am running the job right now, and only very few other jobs run at this hour.
Forgot to mention that I could, of course, pull the copy outside the loop (and may very well end up doing just that), but since the file content is not always correct, errors do occur. When each processed file is archived inside the loop, it is very easy to remove/correct the bad file and start the job again. If the files were copied after the loop and, say, the error occurred on file 35,000 out of 70,000, I would have to move the 35,000 already-processed files manually before restarting the job. Even though I order the output, there would still be a risk of moving an unprocessed file.
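To illustrate the restart behaviour described above, here is a minimal sketch in Python (the folder names `incoming/` and `archive/` and the `process` function are hypothetical, not from the actual job): each file is moved to the archive immediately after it is processed, so a failure mid-run leaves only unprocessed files behind.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout: files unpacked from the zip sit in incoming/,
# and each one is moved to archive/ right after it is processed.
root = Path(tempfile.mkdtemp())
incoming = root / "incoming"
archive = root / "archive"
incoming.mkdir()
archive.mkdir()

# Stand-in for the unpacked zip content.
for i in range(5):
    (incoming / f"file_{i:05}.txt").write_text("some content\n")

def process(path: Path) -> None:
    """Placeholder for the real per-file work; raises on bad content."""
    if not path.read_text().strip():
        raise ValueError(f"bad content in {path.name}")

for f in sorted(incoming.iterdir()):
    process(f)                                  # may raise on a malformed file
    shutil.move(str(f), str(archive / f.name))  # archive inside the loop

# After a clean run incoming/ is empty; after a crash, every file that
# was already processed sits in archive/, so fixing the bad file and
# rerunning resumes exactly where the job stopped.
```

The key point is that the move happens inside the loop, so the set of files remaining in `incoming/` is always exactly the set not yet processed, and no manual sorting is needed before a restart.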
Edited by user 2019-01-14T15:27:38Z | Reason: Not specified