Wednesday, March 21, 2012

Jobs migration between agents

The easiest way to migrate jobs / job streams between agents, with no impact and completely invisible to end users, is to use composer commands as below:

1. Create a dump containing all the jobs & the job streams from the source agent (S_FTA):
       
./<TWS_home>/bin/composer create /tmp/S_FTA_all_jobstreams.txt sched=S_FTA#@
./<TWS_home>/bin/composer create /tmp/S_FTA_all_jobs.txt jobs=S_FTA#@

2. Replace the source agent name (S_FTA) with the destination agent name (D_FTA) in the dump files and save them under different names (D_FTA_all_jobstreams.txt & D_FTA_all_jobs.txt) so the originals can be restored if needed; any text editor will do, for example sed as shown below.
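
A minimal sketch of the substitution (sed here is only an illustration, assuming the agent name appears in the dumps solely as the workstation prefix S_FTA#; review the output before importing):

sed 's/S_FTA#/D_FTA#/g' /tmp/S_FTA_all_jobstreams.txt > /tmp/D_FTA_all_jobstreams.txt
sed 's/S_FTA#/D_FTA#/g' /tmp/S_FTA_all_jobs.txt > /tmp/D_FTA_all_jobs.txt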

3. Remove the dependencies using temporary files (if the dependencies exist when the job streams are imported, composer returns an error and the job stream is not imported; we would have to import them again and again until all the dependencies are met, and it will not work at all if there is a closed cycle):
       
grep -v "FOLLOWS" /tmp/D_FTA_all_jobstreams.txt > /tmp/D_FTA_all_jobstreams_no_dependency.txt

4. Import all the jobs:
       
./<TWS_home>/bin/composer add /tmp/D_FTA_all_jobs.txt

5. Import the job streams without dependency:
       
./<TWS_home>/bin/composer add /tmp/D_FTA_all_jobstreams_no_dependency.txt

6. Import the job streams with dependency:
       
./<TWS_home>/bin/composer replace /tmp/D_FTA_all_jobstreams.txt

7. Delete the jobs and then the job streams from the source agent; a sketch of the cleanup is below. The jobs will start running on the destination agent (D_FTA) starting with the new plan.
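
A hedged sketch of the cleanup, assuming composer delete accepts the same jobs=/sched= filters used by composer create above (;noask skips the per-object confirmation prompt):

./<TWS_home>/bin/composer delete jobs=S_FTA#@;noask
./<TWS_home>/bin/composer delete sched=S_FTA#@;noask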

Note: If the jobs are scheduled using time zones, a complete move may take up to 48 hours (two default plans).

Thursday, March 15, 2012

Corrupted FTA Symphony file

To restore the Symphony file when it gets corrupted on any FTA, perform the following steps as the TWS_user:

Method a. (manual restore)

On the FTA with the corrupted Symphony:
1. Change the limit to 0 to prevent any jobs from being launched once the new Symphony file is created:
       
./<TWS_home>/bin/conman "lc;0"

2. Stop all TWS processes on the FTA with the corrupted Symphony:
       
./<TWS_home>/bin/conman "shutdown;wait"

3. Delete / Rename the following files:
       
Symphony, Sinfonia, Jobtable, <TWS_home>/*.msg and <TWS_home>/pobox/*.msg
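
A minimal sketch that renames instead of deleting, so the files can still be inspected or restored later (assuming the commands are run from <TWS_home>):

cd <TWS_home>
mv Symphony Symphony.corrupt
mv Sinfonia Sinfonia.corrupt
mv Jobtable Jobtable.corrupt
for f in *.msg pobox/*.msg; do mv "$f" "$f.corrupt"; done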

On the TWS MDM: 
4. Unlink all the agents: 
       
./<TWS_home>/bin/conman "unlink @;noask"

5. Stop the TWS MDM agent: 
       
./<TWS_home>/bin/conman "shutdown;wait"

6. Rename the Sinfonia file, then copy the Symphony file over Sinfonia.
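
For example (run from <TWS_home> on the MDM; the .orig copy is kept as a backup):

cd <TWS_home>
mv Sinfonia Sinfonia.orig
cp Symphony Sinfonia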
 
7. Start the TWS MDM agent: 
       
./<TWS_home>/StartUp

8. Link all the agents: 
       
./<TWS_home>/bin/conman "link @;noask"

On the FTA with the corrupted Symphony:
9. Start the agent: 
       
./<TWS_home>/StartUp

On the TWS MDM: 
10. Start & link the FTA with the corrupted Symphony:
       
./<TWS_home>/bin/conman "start FTA_name & link FTA_name;noask"

At this stage the new Symphony file should be deployed to the FTA, exactly as it was before the corruption occurred. However, since the *.msg files were removed / deleted, some job / job stream information may have been lost; verify the job / job stream statuses to make sure none of them run twice.
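
One way to verify, using the standard conman display commands (@#@ matches every workstation):

./<TWS_home>/bin/conman "ss @#@"      # show the status of all job streams
./<TWS_home>/bin/conman "sj @#@.@"    # show the status of all jobs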

11. Change the limit to the original value: 
       
./<TWS_home>/bin/conman "lc;100"


Method b. (semi-automated restore):
There is a script utility developed by IBM for recovering a corrupted Symphony file; it can be found HERE

Hint: If there are multiple agents with a corrupted Symphony file, repeat the above process for each one.


Monday, March 12, 2012

Job distinct / unique / total runs

If a job goes into ABEND and is not fixed within the same TWS day, it is added to the TWS historical database multiple times (as many times as the UPDATESTATS job runs), so the number of job runs is not accurate when we report them with rep7. The examples below were run against an Oracle DB.


The SQL query below filters out the duplicate entries added by UPDATESTATS for unattended ABEND jobs:
       
alter session set current_schema=maestro_DB_user;

SELECT count(*) FROM (SELECT DISTINCT AJS_NAME, AJB_NAME, JHR_START_TIME, JHR_RUN_DATE, 
JHR_TOTAL_ELAPSED_TIME,JHR_STATUS, WKC_NAME FROM JHR_JOB_HISTORY_RUNS
WHERE jhr_run_date>=trunc(sysdate-12) AND jhr_run_date <trunc(sysdate+1)) 
WHERE (AJB_NAME like 'MAKEPLAN')


Also, if you want to see only the unique jobs that ran on a specific workstation:

       
alter session set current_schema=maestro_DB_user;

SELECT count(*) FROM (SELECT UNIQUE AJB_NAME FROM (SELECT DISTINCT AJS_NAME, AJB_NAME, JHR_START_TIME,
JHR_RUN_DATE, JHR_TOTAL_ELAPSED_TIME , JHR_STATUS, WKC_NAME  FROM JHR_JOB_HISTORY_RUNS
WHERE jhr_run_date>=trunc(sysdate-12) AND jhr_run_date <trunc(sysdate+1)) 
WHERE (WKC_NAME like 'TWSMASTER'))



And a nice report containing all job runs, all successful runs, all ABEND / CANCEL runs, unique successful jobs and unique failed jobs:

       
alter session set current_schema=maestro_DB_user;

SELECT
Count (CASE WHEN JHR_STATUS = 'S' or JHR_STATUS = 'A' 
or JHR_STATUS = 'C' THEN '1' ELSE NULL END) All_Job_Runs,
Count (CASE WHEN JHR_STATUS = 'S' THEN '1' ELSE NULL END) All_Runs_Successful,
Count (CASE WHEN JHR_STATUS = 'A' or JHR_STATUS = 'C' THEN '1' ELSE NULL END) All_Runs_Failed,
Count (DISTINCT CASE WHEN JHR_STATUS = 'S' THEN AJB_NAME ELSE NULL END) Unique_Jobs_Successful,
Count (DISTINCT CASE WHEN JHR_STATUS = 'A' 
or JHR_STATUS = 'C' THEN AJB_NAME ELSE NULL END) Unique_Jobs_Failed
FROM (select distinct AJS_NAME, AJB_NAME, JHR_START_TIME, JHR_RUN_DATE, JHR_TOTAL_ELAPSED_TIME,
JHR_STATUS, WKC_NAME from JHR_JOB_HISTORY_RUNS
WHERE jhr_run_date>=trunc(sysdate-12) AND jhr_run_date <trunc(sysdate+1));



Hint: Modify sysdate±XX to adjust the number of days the reports cover; you can also use explicit dates, for example:

       
jhr_run_date>=TO_DATE('10/03/2012','dd/mm/yyyy') AND jhr_run_date <TO_DATE('12/03/2012','dd/mm/yyyy')


Friday, March 9, 2012

SAP Xagent connection test

To check if a SAP Xagent using r3batch is properly configured, run the following command:
       
./<TWS_home>/methods/r3batch -t PL -c Workstation_name -l "*" -j "*" -- "-debug"


Additional details can be found in <TWS_home>/methods/trace-r3batch.log if you add the "-trace" option for the problematic SAP jobs, as shown below.
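For example, the same connection test with tracing enabled:

./<TWS_home>/methods/r3batch -t PL -c Workstation_name -l "*" -j "*" -- "-debug -trace"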

Hint: To run SAP jobs using TWS you need TWS for Applications to be deployed; it can be installed directly on the TWS master or on any FTA.

Thursday, March 8, 2012

Add a new workstation to plan without waiting for FINAL

To add a new workstation to the plan without waiting for the FINAL job stream to run, use the command below:
./<TWS_home>/JnextPlan -for 0000
The above command "refreshes" the plan: it adds the new workstation, archives the successful jobs (if you don't want the successful jobs archived, add the -noremove parameter as shown below), and so on.
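
For example, to refresh the plan while keeping the completed job instances:

./<TWS_home>/JnextPlan -for 0000 -noremove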

Hint: Make sure that all job streams have the carryforward option set, or that the global option enCarryForward is set to ALL.
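
One way to check and change the global option, assuming the optman CLI on the master (the change takes effect at the next plan generation):

./<TWS_home>/bin/optman show cf       # display the current enCarryForward value
./<TWS_home>/bin/optman chg cf=ALL    # carry forward all uncompleted job streams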

Tuesday, March 6, 2012

rep7 SQL query

rep7 is very slow and hard to use; below is the SQL query equivalent for the MAKEPLAN job (using an Oracle DB):
       
alter session set current_schema=maestro_DB_user;

SELECT DISTINCT AJS_NAME, AJB_NAME, JHR_START_TIME, JHR_RUN_DATE, JHR_TOTAL_ELAPSED_TIME, 
JHR_STATUS, WKC_NAME FROM JHR_JOB_HISTORY_RUNS
WHERE (AJB_NAME like 'MAKEPLAN') order by AJB_NAME

Job average run

The SQL query below calculates a job's average run time (in our case MAKEPLAN) using the historical data stored in the DB (using an Oracle DB):
       
alter session set current_schema=maestro_DB_user;

select avg (JHR_TOTAL_ELAPSED_TIME) as average
FROM JHR_JOB_HISTORY_RUNS
where AJB_NAME like 'MAKEPLAN'

If you want to calculate the average run time only for successful runs:
       
alter session set current_schema=maestro_DB_user;

select avg (JHR_TOTAL_ELAPSED_TIME) as average
FROM JHR_JOB_HISTORY_RUNS
where AJB_NAME like 'MAKEPLAN' and JHR_STATUS = 'S'

Hint: The age of the historical data is controlled by the global option statsHistory / sh; the default value is 10 days.
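
To check or extend the retention, for example to 30 days (assuming the optman CLI on the master):

./<TWS_home>/bin/optman show sh     # display the current statsHistory value
./<TWS_home>/bin/optman chg sh=30   # keep 30 days of job history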



Jobs without Job Stream

To find out which jobs do not belong to any job stream, the easiest way is to run an SQL query (using an Oracle DB):
       
alter session set current_schema=maestro_DB_user;

SELECT JOD_NAME FROM JOD_JOB_DEFINITIONS
WHERE JOD_NAME NOT IN
(SELECT AJB_NAME FROM AJB_ABSTRACT_JOBS)