We have a set of extracts that generate very large files that take forever to write to the network. I'm looking for tips on doing it differently.
The jobs have 3 tasks: read SQL code from a file, execute it against Oracle, and write the output to a file.
Some extracts return a very large number of records (around 2.5 million) and take a couple of hours to finish. I can run the same queries in Toad and they complete in no time, so I suspect the issue is either the SQL task caching the entire output or the write task reading that cache and writing it out. One of these jobs has been running for over an hour right now and still hasn't started writing to the network drive.
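For reference, the kind of alternative I'm wondering about is fetching the result set in chunks and writing rows out as they arrive, instead of buffering all 2.5 million rows before the write starts. A rough sketch of that idea outside of VC, just to illustrate (it assumes the cx_Oracle driver; the paths, credentials, and DSN are made-up placeholders, not our actual setup):

```python
# Rough sketch only, not our actual job. Assumes the cx_Oracle driver is installed;
# file paths, credentials, and the DSN below are placeholders.
import csv
import cx_Oracle

# Read the SQL code from a file (hypothetical path)
sql = open(r"D:\extracts\big_extract.sql").read()

conn = cx_Oracle.connect("extract_user", "password", "ORCLPDB")  # placeholder credentials/DSN
cur = conn.cursor()
cur.arraysize = 5000  # fetch in chunks instead of pulling the whole result set at once

cur.execute(sql)
with open(r"\\fileserver\extracts\big_extract.csv", "w", newline="") as f:  # hypothetical UNC path
    writer = csv.writer(f)
    writer.writerow([d[0] for d in cur.description])  # header row from the cursor metadata
    while True:
        rows = cur.fetchmany()   # pulls cur.arraysize rows per round trip
        if not rows:
            break
        writer.writerows(rows)   # write each chunk as it arrives

cur.close()
conn.close()
```

If VC's SQL and write tasks have settings that behave like this (streaming rather than holding the whole output in memory between tasks), that's what I'd like to find.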
Has anyone else overcome a similar issue?
May be related: if we have several of these running at the same time and kill the jobs, we find that the VC program will crash, and we get disconnected if we restart the jobs and let them run for a while. VC then needs to be manually killed and restarted from the server. Is it possible we have a configuration setting wrong and are using memory incorrectly, or not clearing memory after a crash? If you can point me to the right place to look or troubleshoot, I'd be grateful.
Gary
VC 8.2.8 on Windows 7 64-bit clients and Windows Server 2012 VMs with 64 GB of memory
Update: Naturally, right after I posted this a coworker showed me the "clear memory" button on the server info screen, which we will now use after killing jobs/tasks if we have to. Hopefully that is best practice for that scenario.