

Knighty
2019-05-14T21:35:34Z
Hi

We have over 1000 jobs running in VisualCron 8.5.0 Pro (clean install) in AWS on a c5.xlarge instance (Xeon 3.0 GHz, 4 vCPU cores, 8 GB memory; normally 20-40% CPU load). Is it possible to scale up the backend database server? It sometimes takes quite a while to open jobs, or even to retrieve the list of grouped jobs from the backend, using a local VCClient login session (on the same machine as the server).

Also, can I confirm whether the web client is broken in 8.5.0 Pro? It just will not run even though everything looks right; it only delivers empty pages:

<html>
<head>
</head>
<body>
</body>
</html>


thanks

andy @ informa
Support
2019-05-15T08:10:48Z
Please upgrade to 8.5.1 regarding the Web client. If you still want to use 8.5.0, you can download the web server fix manually.

I am not sure the problem is the large number of Jobs; it is more likely what they do at the same time. We are currently working on a Task manager where you will be able to see CPU usage per Job/Task. This might help in understanding what is using the CPU in your case.
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Knighty
2019-05-15T09:00:05Z
Is there no way of limiting the resources the system can utilize? Like, say, a max CPU usage of 90%, or processor affinity so you could leave one core alone?

Our problem doesn't seem to be directly processor-related; it's data retrieval while filtering or viewing the Jobs window. That is why I asked if the backend could be scaled up to a beefier delivery system - is it using SQL Compact Edition?

Thanks

Andy
Knighty
2019-05-15T09:03:32Z
Regarding the web client, I found a link in another post to download WebClient.7z, which worked. Is there a direct login mechanism available, rather than having to use an OAuth sign-in from Yahoo, Google, Facebook, or Live ID?

We are in a corporate environment and none of these options is suitable.

Thanks

Andy

Not sure if there's something wrong, but correctly using the Microsoft (Live ID) sign-in yields:

Microsoft account is unavailable
Microsoft account is unavailable from this site, so you can't sign in or sign up. The site may be experiencing a problem.

You can sign in or sign up at other Microsoft sites and services, or try again later at this site.
Knighty
2019-05-15T21:30:22Z
For the next version, could you add a delay/debounce/throttle to search filtering? When you have a lot of jobs, it takes a very long time to return results for every character as it is entered; searching for `error` performs five searches and freezes the UI while collating the results.
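
For illustration, this is the kind of debounce I mean (a rough TypeScript sketch only; applyFilter is a stand-in name, not actual VisualCron code):

// Debounce: wait until the user pauses typing before running the filter.
function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Hypothetical usage: typing "error" triggers one search instead of five.
const applyFilter = (term: string) => console.log(`filtering jobs on "${term}"`);
const debouncedFilter = debounce(applyFilter, 300);
["e", "er", "err", "erro", "error"].forEach(t => debouncedFilter(t));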

thanks
Support
2019-05-27T12:57:27Z
Originally Posted by: Knighty 

Is there no way of limiting the resources the system can utilize? Like, say, a max CPU usage of 90%, or processor affinity so you could leave one core alone?

Our problem doesn't seem to be directly processor-related; it's data retrieval while filtering or viewing the Jobs window. That is why I asked if the backend could be scaled up to a beefier delivery system - is it using SQL Compact Edition?

Thanks

Andy



Not as such - but you can use an External system, which I think is the best solution for you.
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Support
2019-05-27T12:57:58Z
Originally Posted by: Knighty 

Regarding the web client, I found a link in another post to download WebClient.7z, which worked. Is there a direct login mechanism available, rather than having to use an OAuth sign-in from Yahoo, Google, Facebook, or Live ID?

We are in a corporate environment and none of these options is suitable.

Thanks

Andy

Not sure if there's something wrong, but correctly using the Microsoft (Live ID) sign-in yields:

Microsoft account is unavailable
Microsoft account is unavailable from this site, so you can't sign in or sign up. The site may be experiencing a problem.

You can sign in or sign up at other Microsoft sites and services, or try again later at this site.



Why can't you use AD auth?
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Support
2019-05-27T12:58:38Z
Originally Posted by: Knighty 

For the next version, could you add a delay/debounce/throttle to search filtering? When you have a lot of jobs, it takes a very long time to return results for every character as it is entered; searching for `error` performs five searches and freezes the UI while collating the results.

thanks



I think something might be wrong with your system if you have these kinds of bottlenecks. Have you tried connecting with the Client on your desktop to the remote server and filtering the same way?
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Jason Schaitel (WV)
2019-05-29T22:26:14Z
Have you tried offloading all your logging to an external database? That way, only VC system internals, such as UI job filtering, should be putting load on your SQL Compact Edition file. I can only imagine the amount of logging that many jobs generates.

We direct all our logging data to a SQL Server 2016 database and it works pretty well. Worth considering if you are not already doing that.
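
For reference, the connection details are along these lines (the server and database names here are made up for illustration):

Server=LOGSQL01;Database=VisualCronLogs;Integrated Security=True;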
Knighty
2019-07-01T16:57:00Z
Originally Posted by: Support 

Originally Posted by: Knighty 

For the next version, could you add a delay/debounce/throttle to search filtering? When you have a lot of jobs, it takes a very long time to return results for every character as it is entered; searching for `error` performs five searches and freezes the UI while collating the results.

thanks



I think something might be wrong with your system if you have these kinds of bottlenecks. Have you tried connecting with the Client on your desktop to the remote server and filtering the same way?



External client logins filter even more slowly, as results are returned for each character entered in the filter.

It's the weight of the database. We have logging set to an external SQL Server Enterprise Edition DB; that's fine.

We're using VisualCron on a c5.xlarge Amazon instance, which is costing thousands per month in order to offer a "reasonable" level of performance.

When filtering Jobs by name, the first letter typed causes a delay of 30-50 seconds, depending on server load at the time; the second letter is slightly faster, and the 5th or 6th letter usually isn't slow. This speed seems only slightly affected by the load that running Tasks place on the server.

SQL Compact Edition is a great way to get up and running, but for a heavily utilized production system responsible for 80% of our data ingestion, it's a little slow once you head upward of 500 jobs. This one has 1100 jobs in it and could do with a bigger backend.

Is there any optimization I could perform on the DB, apart from splitting 50% of the jobs out to a second VC server?
Support
2019-07-02T18:01:04Z
Originally Posted by: Knighty 

Originally Posted by: Support 

Originally Posted by: Knighty 

For the next version, could you add a delay/debounce/throttle to search filtering? When you have a lot of jobs, it takes a very long time to return results for every character as it is entered; searching for `error` performs five searches and freezes the UI while collating the results.

thanks



I think something might be wrong with your system if you have these kinds of bottlenecks. Have you tried connecting with the Client on your desktop to the remote server and filtering the same way?



External client logins filter even more slowly, as results are returned for each character entered in the filter.

It's the weight of the database. We have logging set to an external SQL Server Enterprise Edition DB; that's fine.

We're using VisualCron on a c5.xlarge Amazon instance, which is costing thousands per month in order to offer a "reasonable" level of performance.

When filtering Jobs by name, the first letter typed causes a delay of 30-50 seconds, depending on server load at the time; the second letter is slightly faster, and the 5th or 6th letter usually isn't slow. This speed seems only slightly affected by the load that running Tasks place on the server.

SQL Compact Edition is a great way to get up and running, but for a heavily utilized production system responsible for 80% of our data ingestion, it's a little slow once you head upward of 500 jobs. This one has 1100 jobs in it and could do with a bigger backend.

Is there any optimization I could perform on the DB, apart from splitting 50% of the jobs out to a second VC server?



What is important in this case is that the Client computer is fast, as the sorting does not rely on any database at all - just internal memory, CPU, and graphics rendering. So if you can connect remotely from your desktop, you will probably get higher performance.
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Knighty
2019-07-02T20:19:42Z
Hi

In this case that is questionable - wouldn't the amount of data that needs to be transferred from SQLCE to the client remotely cancel out any performance gains over a local client?

If the search were to begin on the 2nd or 3rd key-press in the input, the result set would already be better refined (i.e. smaller) by the time the user has entered the search term they're looking for. This would probably not be ideal for everyone, so maybe making it configurable in the settings dialogues would be appropriate.
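
Something like this is what I have in mind (a sketch only; minChars and onFilterInput are hypothetical names, not VisualCron settings):

// Only start filtering once the term is long enough to narrow the result set.
const minChars = 3; // hypothetical setting, ideally configurable in the client

function onFilterInput(term: string, runFilter: (t: string) => void): void {
  if (term.length < minChars) return; // skip the expensive 1-2 character searches
  runFilter(term);
}

onFilterInput("er", t => console.log(t));    // ignored: too short
onFilterInput("error", t => console.log(t)); // runs the filter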

I still feel that SQLCE is not great under a lot of load, with over 1,000 jobs averaging 12 steps per job.

We're running VC on an AWS c4.4xlarge: 16 vCPUs, 30 GiB memory, EBS-only storage, 2,000 Mbps dedicated EBS bandwidth, high network performance.

Anyway, we have found that if we restart the VCron server frequently, we can keep search delays within reasonable times.
Support
2019-07-02T20:24:55Z
Originally Posted by: Knighty 

Hi

In this case that is questionable - wouldn't the amount of data that needs to be transferred from SQLCE to the client remotely cancel out any performance gains over a local client?

If the search were to begin on the 2nd or 3rd key-press in the input, the result set would already be better refined (i.e. smaller) by the time the user has entered the search term they're looking for. This would probably not be ideal for everyone, so maybe making it configurable in the settings dialogues would be appropriate.

I still feel that SQLCE is not great under a lot of load, with over 1,000 jobs averaging 12 steps per job.

We're running VC on an AWS c4.4xlarge: 16 vCPUs, 30 GiB memory, EBS-only storage, 2,000 Mbps dedicated EBS bandwidth, high network performance.

Anyway, we have found that if we restart the VCron server frequently, we can keep search delays within reasonable times.



First, communication never happens directly between the Client and SQLCE. Traffic between Client and Server is sent through named pipes (when connecting locally), which is very fast; it is the Server that connects to SQLCE. The only time a lot of information can flow from SQLCE through the Server to the Client is when you request log history.
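
For context, a named pipe is a local inter-process channel addressed like a file path. A minimal TypeScript/Node sketch of connecting to one (the pipe name is made up for illustration; it is not VisualCron's actual pipe):

import * as net from "net";

// On Windows, named pipes live under the \\.\pipe\ namespace.
const client = net.createConnection({ path: "\\\\.\\pipe\\ExamplePipe" }, () => {
  client.write("ping"); // send a request over the pipe
});

client.on("data", chunk => {
  console.log("server replied:", chunk.toString());
  client.end();
});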

Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Support
2019-07-02T20:25:59Z
To be clear again: when searching in the main window, no database interaction occurs - only the local memory of the Client is used. There is not even any communication between Client and Server; it is just sorting and filtering in the Client's memory.
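
Conceptually, the filtering is nothing more than this (a simplified sketch, not the actual Client code):

// The Client already holds all Jobs in memory; filtering is a plain in-memory scan.
interface Job { name: string; group: string; }

function filterJobs(jobs: Job[], term: string): Job[] {
  const t = term.toLowerCase();
  return jobs
    .filter(j => j.name.toLowerCase().includes(t))
    .sort((a, b) => a.name.localeCompare(b.name));
}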
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!
Support
2019-07-02T20:52:01Z
Here is a video of me quickly filtering 686 Jobs. I doubt 1000 Jobs will take much longer:

https://www.screencast.com/t/JeF95d5CNB 

We could test with your specific Jobs if you export and send them to support@visualcron.com, but I think it comes down to the CPU, memory, and GPU of the Client computer.
Henrik
Support
http://www.visualcron.com 
Please like  VisualCron on facebook!