What you could do, at least for this example, is create a single job (call it "Job A") that contains all of your "little settings" like flow, conditions, etc. This job can have a job variable for the URL and another for the specific text you want to search for. Then create a separate job for each of the dozen sites you have; each of these jobs would simply execute Job A with different parameters passed into the job variables. If some of these sites have, say, different DNS timeouts, you could pass that into Job A as a job variable as well. A scheduler-agnostic sketch of what Job A might do is shown below.
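Here's a minimal Python sketch of what Job A's work might look like, just to make the idea concrete. The names `url`, `search_text`, and `timeout_seconds` are hypothetical stand-ins for the job variables; your scheduler will have its own way of injecting them (here they arrive as command-line arguments):

```python
import sys
import urllib.request


def job_a(url: str, search_text: str, timeout_seconds: float = 10.0) -> bool:
    """Fetch the page at `url` and report whether `search_text` appears in it."""
    with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
        page = response.read().decode("utf-8", errors="replace")
    return search_text in page


if __name__ == "__main__":
    # The "job variables" from the calling job arrive as arguments.
    url, search_text = sys.argv[1], sys.argv[2]
    timeout = float(sys.argv[3]) if len(sys.argv) > 3 else 10.0
    found = job_a(url, search_text, timeout)
    print(f"{'FOUND' if found else 'NOT FOUND'}: {search_text!r} at {url}")
```

Each per-site job (Job B, Job C, ...) then just invokes this with its own URL, search text, and timeout.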
If you were to execute them all at the same time, though, you may run into issues pulling results from Job A back into the calling job (e.g., if both Job B and Job C execute at the same time, and therefore call Job A at the same time, and then try to reference Job A's variables, I'm not sure whether Job C might get results that actually pertain to Job B, or something like that). To combat this, you could pass in a Run ID of some sort (e.g., the calling job name plus the current timestamp: "Job_B_20190430132301"). Have Job A export its results into a text file named after the Run ID, and have the calling job read the results from that text file. If the results are "good", you could then delete the text file.
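As a rough illustration of that Run ID handoff, in plain Python and independent of any particular scheduler (the function names are placeholders, and `results_look_good` stands in for whatever "good" means for your jobs):

```python
import os
from datetime import datetime


def make_run_id(calling_job: str) -> str:
    # e.g. "Job_B_20190430132301" -- calling job name plus a timestamp.
    return f"{calling_job}_{datetime.now():%Y%m%d%H%M%S}"


def job_a_write_results(run_id: str, results: str, results_dir: str = ".") -> str:
    # Job A writes its results to a file named after the Run ID, so
    # concurrent callers never read each other's output.
    path = os.path.join(results_dir, f"{run_id}.txt")
    with open(path, "w", encoding="utf-8") as f:
        f.write(results)
    return path


def results_look_good(results: str) -> bool:
    # Placeholder check; substitute whatever validation your jobs need.
    return bool(results.strip())


def calling_job_read_results(run_id: str, results_dir: str = ".") -> str:
    # The calling job reads only its own results file, then cleans it
    # up once the contents check out.
    path = os.path.join(results_dir, f"{run_id}.txt")
    with open(path, "r", encoding="utf-8") as f:
        results = f.read()
    if results_look_good(results):
        os.remove(path)
    return results
```

Because each caller generates a unique Run ID, Job B and Job C can run Job A simultaneously without stepping on each other's output files.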
Just a thought. I'm not sure if this is the best way to handle this or not, but it should work.