
Sunday, 5 March 2017

How to get DataStage Project Settings


DSListProjectSettings.bat is used to report all available settings of an existing project.

USAGE: DSListProjectSettings <SERVER> <USER> <PASSWORD> <PROJECT>

The DSListProjectSettings batch file reports all available project settings and environment variable names and values to standard output. The first section shows the available DataStage project properties and their settings; the second section shows environment variable names and values. When run successfully, each section reports a status code of zero.
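For example, with a hypothetical engine host, credentials, and project name (all placeholder values):

DSListProjectSettings dsengine1 dsadm secret MYPROJECT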






Tuesday, 21 February 2017

PDF Split & Merge with Perl


Sharing two utility scripts (Perl) that can be used to extract just a few pages from a PDF file and to combine several PDF files into a single one. There are plenty of situations where we need only a few pages from a big PDF file, so instead of storing or carrying the whole file, we can extract just the page or pages we require from the original. We can also combine different PDF files into a single PDF using the second script. For example, if we have extracted pages 10-15 and pages 100-120 from a big PDF using pdf_extract.pl, we can combine those two PDFs (the one containing pages 10-15 and the one containing pages 100-120) into a single PDF using pdf_merge.pl.


NOTE : These two Perl scripts use a Perl module called PDF::API2. If it is not present on your system as part of the Perl installation, you can download the module from www.cpan.org and install it. Please see the installation section for more details.


These two scripts can be used on Windows, Unix, or Linux. They are currently tested on Windows with ActivePerl 5.8.8, but they should work on Unix and Linux as well. For the pdf_extract.pl script to work on Unix and Linux, change the variable called "path_separator" to "/" instead of "\\"; this variable can be seen at the start of the script. pdf_merge.pl can be used on Windows and Unix/Linux without any modification.


Usage:

1) pdf_extract.pl

     perl pdf_extract.pl -i <input pdf file> -p <page or page range which needs to be extracted>

        where
        -i : Please give the full path to the input PDF file
        -p : Page Number or Page range which needs to be extracted from the input PDF

        example : To extract pages 3 to 5, execute

           perl pdf_extract.pl -i /tmp/abc.pdf -p 3-5

        example : To extract only page 3, execute

           perl pdf_extract.pl -i /tmp/abc.pdf -p 3


Executing with the -h option will display the usage on the screen

Example : perl ./pdf_extract.pl -h


2) pdf_merge.pl


perl pdf_merge.pl <output pdf file with full path> <input pdf file 1> <input pdf file 2> ...

Execute the script with all the PDF files that need to be merged.
The script merges them in the same order they are given in the input,

i.e. if you execute pdf_merge.pl /tmp/out.pdf /tmp/abc.pdf /tmp/xyz.pdf

then the pages from xyz.pdf will come after the pages from abc.pdf

Executing with the -h option will display the usage on the screen

Example : ./pdf_merge.pl -h


CodeBase:

README File:
pdf_extract.pl
pdf_merge.pl
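As a rough sketch of the extraction logic only - not the original pdf_extract.pl - here is a minimal PDF::API2 version; the option handling and the derived output-file name are assumptions:

#!/usr/bin/perl
# Minimal sketch of the page-extraction idea using PDF::API2.
# Option names follow the usage shown above; output naming is an assumption.
use strict;
use warnings;
use Getopt::Long;
use PDF::API2;

my ($in, $pages);
GetOptions('i=s' => \$in, 'p=s' => \$pages)
    or die "Usage: perl pdf_extract.pl -i <input pdf> -p <page|first-last>\n";
die "Usage: perl pdf_extract.pl -i <input pdf> -p <page|first-last>\n"
    unless defined $in && defined $pages;

my ($first, $last) = $pages =~ /^(\d+)(?:-(\d+))?$/
    or die "Bad page specification: $pages\n";
$last = $first unless defined $last;

my $src = PDF::API2->open($in);                    # source document
my $out = PDF::API2->new();                        # empty target document
$out->import_page($src, $_) for $first .. $last;   # append each requested page

(my $outfile = $in) =~ s/\.pdf$/_pages$first-$last.pdf/i;  # assumes a .pdf suffix
$out->saveas($outfile);
print "Wrote $outfile\n";

pdf_merge.pl follows the same pattern: open each input in turn and import_page every page of it into one target document.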








Wednesday, 7 December 2016

Import the jobs from DS windows client #iLoveScripting


We have already discussed a script that exports DataStage jobs from your client system (http://bit.ly/2frNPKj); likewise, we can write another one to import jobs. Let's see how -

DsImportJobsClient.bat :

This script reads all the *.dsx job files from the specified directory and its sub-directories and imports them into the specified project. It can also build (only BUILD) an existing package created in Information Server Manager and send it to the specified location on the client machine.

To use the build feature, make sure the package has been created with all the needed jobs, then saved and closed. Updates to the selected jobs are picked up automatically; to add or delete a job, you must do it manually.

Modify the Import.properties and ImportJobList.txt files, go to the .bat directory, and execute importAndBuild.bat.




Import.properties :


ImportJobList.txt :


DsImportJobsClient.bat :
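A minimal sketch of the import loop such a batch file might contain. dsimport.exe ships with the DataStage Windows client, but verify its flags against your client version's documentation; all set values below are placeholders:

@echo off
rem Sketch only: import every .dsx found under DSXDIR into one project.
rem All set values are placeholders; adjust for your environment.
set DSCLIENT=C:\IBM\InformationServer\Clients\Classic
set DOMAIN=services-host:9080
set HOST=engine-host
set USER=dsadm
set PASS=secret
set PROJECT=MYPROJECT
set DSXDIR=C:\export\jobs

for /R "%DSXDIR%" %%F in (*.dsx) do (
    echo Importing %%F ...
    "%DSCLIENT%\dsimport.exe" /D=%DOMAIN% /H=%HOST% /U=%USER% /P=%PASS% /NUA %PROJECT% "%%F"
)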






Wednesday, 12 October 2016

Script to Auto Compress the System Log Files


This script was originally written by "Andy Welter" to compress the Linux system log files. I have modified it to work better; you can find the modified version below -

Script Usage:
logroll [-compress|-nocompress]

$ logroll -compress
# compress the log files and move them to the archive directory

$ logroll -nocompress
# move the log files to the archive directory without compressing them


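A minimal sketch of such a logroll script; the log list, archive path, and timestamp format are assumptions:

#!/bin/sh
# logroll -- move system log files to an archive directory, gzip'ing them
# when -compress is given. Sketch only; paths and log names are assumed.
LOGDIR=/var/log
ARCHIVE=/var/log/archive
LOGS="messages syslog secure"
STAMP=`date +%Y%m%d%H%M`

case "$1" in
  -compress)   COMPRESS=yes ;;
  -nocompress) COMPRESS=no ;;
  *) echo "Usage: logroll [-compress|-nocompress]"; exit 1 ;;
esac

mkdir -p "$ARCHIVE"
for LOG in $LOGS ; do
  [ -f "$LOGDIR/$LOG" ] || continue
  mv "$LOGDIR/$LOG" "$ARCHIVE/$LOG.$STAMP"
  : > "$LOGDIR/$LOG"                  # recreate an empty log file
  [ "$COMPRESS" = "yes" ] && gzip "$ARCHIVE/$LOG.$STAMP"
done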





Wednesday, 3 August 2016

#3 How to Copy DataSet from One Server to Another Server

This post is third and last of How to Copy DataSet from One Server to Another Server Series

We have generated and populated a dataset and identified the files that we need to move from serverA to another server, serverB.

Continue.......

4. Reading the dataset on another server

This is the most crucial step. By now, all 4 files have been moved to serverB, or to a common location accessible from serverB.

For my case, common dir is my home - /home/users/atul


A. Change the default.apt file
We need to change the fastname in the default.apt (config file) that we copied from serverA [NOT in serverB's own default.apt].

Open the file in any text editor or vi and make the change shown below -
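In text form, the edit looks like this (the node name and resource paths are whatever your copied file contains):

node "node1" {
    fastname "serverA"        <-- change this to:  fastname "serverB"
    pools ""
    resource disk "/opt/IBM/InformationServer/Server/DataSets" {pools ""}
    ...
}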


Temporarily create the "resource disk" and "resource scratchdisk" locations if they do not already exist as defined in the above config file.

B. Copy the dataset data files 

Move the dataset data files from the common directory to the "resource disk" location defined in the config file.

cp ~/dummy.ds.* /opt/IBM/InformationServer/Server/DataSets/


Now all the file locations are -

Config file and Dataset descriptor file - my home dir or common dir
Dataset data files - /opt/IBM/InformationServer/Server/DataSets/


Design a job that will read these dataset files and populate the data into a sequential file or any other output.


Job Parameters -
APT_CONFIG_FILE = /home/users/atul/default.apt

DataSet Properties
DataSet File - /home/users/atul/dummy.ds

That is all - you can now read the copied dataset on serverB. Populate this data into some other output, such as a sequential file or a table, so that you can stop relying on the copied default.apt config file, which does not belong to serverB.

Try it out, let me know if you have any question.





Monday, 1 August 2016

#2 How to Copy DataSet from One Server to Another Server


This post is second part of How to Copy DataSet from One Server to Another Server

Continue.......

After generating the dummy dataset, the next step is to identify the files that we need to copy.

2. Files which we need to move

a. APT_CONFIG_FILE - the configuration file used to create the dataset
b. DataSet descriptor file - the *.ds file; in our case, dummy.ds
c. DataSet data files - the actual data files, stored in the RESOURCE DISK location

So let's get all the paths that we need to access -

APT_CONFIG_FILE = /opt/IBM/InformationServer/Server/Configurations/default.apt
RESOURCE DISK = /opt/IBM/InformationServer/Server/DataSets
DATASET LOC = /home/users/atul/dummy.ds



Use commands or any FTP tool to copy these files to a shared location that is accessible from the other server (serverB).

For my case, I have stored all of them in my Linux home directory, which is common to both servers.

So I have executed these commands to copy all the required files into my home directory.


cp  /opt/IBM/InformationServer/Server/Configurations/default.apt ~
cp  /opt/IBM/InformationServer/Server/DataSets/dummy.ds.* ~
cp  /home/users/atul/dummy.ds ~


Now my home directory contains these files -
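Something like this (the data-segment file names include user/host parts that vary, so they are truncated here):

$ ls ~
default.apt
dummy.ds
dummy.ds.atul.serverA.0000...
dummy.ds.atul.serverA.0001...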


You can copy these 4 files to serverB, where you want to move your dataset. I am not doing that, as my home directory is common to both servers.

3. Why we need only these files

The config file was used by DataStage to create the dataset (the descriptor file, the data files, and their locations). So we need the config file, the dataset descriptor file, and the dataset data files.






Saturday, 30 July 2016

#1 How to Copy DataSet from One Server to Another Server



Hi Guys...
I've been asked many times how we can move or copy a dataset from one server to another, so here is the way I follow.

As the very first step, analyze whether you can avoid this altogether in some other way: create a sequential file and FTP it, or load the data into a temporary table that is accessible from the other server, or, if you are using DataStage packs, exchange it via MQ, XML, or JSON formats. I suggest these solutions because they are easy to design and they guarantee data quality at the other end.

If above solutions are not possible, please follow the below steps -

Points I am going to cover here -
1. Generating a dummy dataset
2. Files which we need to move
3. Why we need these files only
4. Reading the dataset on another server


 

1. Generating a dummy dataset

I have created a dummy job that generates a dataset using the default APT config file, which has 2 nodes.






Here, I am generating 10 dummy rows with the help of a Row Generator stage and storing them in a dataset.

a. Config File - I am using the default config file (replaced the server name in "fastname" with serverA)

APT_CONFIG_FILE = /opt/IBM/InformationServer/Server/Configurations/default.apt


Check out the "resource disk" location in the config file; we need it for further processing.

RESOURCE DISK = /opt/IBM/InformationServer/Server/DataSets
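For reference, a two-node config file of the kind used here has this shape (the scratchdisk path is an assumption; your default.apt values may differ):

{
    node "node1" {
        fastname "serverA"
        pools ""
        resource disk "/opt/IBM/InformationServer/Server/DataSets" {pools ""}
        resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
    }
    node "node2" {
        fastname "serverA"
        pools ""
        resource disk "/opt/IBM/InformationServer/Server/DataSets" {pools ""}
        resource scratchdisk "/opt/IBM/InformationServer/Server/Scratch" {pools ""}
    }
}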

b. dataset location - I have created this dataset in my home dir named dummy.ds

DATASET LOC = /home/atul/dummy.ds


Keep looking for next post........






Tuesday, 7 June 2016

Tuning SQL Statement #4



Using Functions in Queries

When using functions in SQL statements, it is best to keep the functions away from any columns involving index matching, unless function-based indexes are available and can be created.

Do’s   
SELECT account_name, trans_date, amount
FROM transaction
WHERE account_name LIKE 'CAPITAL%';    


Don’ts
SELECT account_name, trans_date, amount
FROM transaction
WHERE SUBSTR(account_name,1,7) = 'CAPITAL';




Sequences
A sequence is often used to create unique integer identifiers as primary keys for tables. A sequence is a distinct database object and is accessed as sequence.NEXTVAL and sequence.CURRVAL.
Use sequences for unique integer identifiers; they are centralized, more controllable, more easily maintained, and perform better than other methods of counting.

Do’s  
INSERT INTO supplier (supplier_id, name, ticker)
      VALUES (supplier_seq.NEXTVAL,'A new supplier', 'TICK');    


 Don’ts
INSERT INTO supplier (supplier_id, name, ticker)
      VALUES ((SELECT MAX(supplier_id)+1
      FROM supplier), 'A new supplier', 'TICK');


    

Equi, Anti, and Range
Using an equals sign (equi) is the fastest comparison condition if a unique index exists. Any type of anti comparison such as != or NOT is looking for what is not in a table and thus must read the entire table; sometimes full index scans can be used.

Do’s
SELECT account_name, trans_date, amount
FROM transaction
WHERE amount > 0;


Don’ts
SELECT account_name, trans_date, amount
FROM transaction
WHERE amount != 0;



LIKE Pattern Matching

In general, since LIKE will match patterns that are in no way related to indexes, LIKE will usually read an entire table.







Monday, 23 May 2016

Tuning SQL Statement #3


Tuning SQL Statement #1.......
Tuning SQL Statement #2.......

Consider whether a UNION ALL will suffice in place of a UNION.
The UNION statement performs the equivalent of a SELECT DISTINCT on the final result set. In other words, the UNION clause forces all rows returned by each portion of the UNION to be sorted, merged, and de-duplicated before the first row is returned. A UNION ALL simply returns all rows, including duplicates, and does not have to perform any sort, merge, or filter. If your tables are mutually exclusive (include no duplicate records), or you don't care whether duplicates are returned, UNION ALL is much more efficient because it skips the SELECT DISTINCT work, saving a lot of unnecessary resources.

Do’s    
SELECT acct_num, balance_amt
FROM debit_transactions
WHERE tran_date = '1-JAN-06' 
UNION ALL
SELECT acct_num, balance_amt
FROM credit_transactions
WHERE tran_date = '1-JAN-06' ;  

Don’ts
SELECT acct_num, balance_amt
FROM debit_transactions
WHERE tran_date = '1-JAN-06'
UNION
SELECT acct_num, balance_amt
FROM credit_transactions
WHERE tran_date = '1-JAN-06' ;



The DECODE Function
Using the DECODE function as a replacement for multiple query set operators is good for performance, but it should only be used in extreme cases, such as the UNION-joined SQL statements shown here.

Do’s  
SELECT stock_id||' '||
      DECODE(SIGN(qtyonhand)
            ,-1,'Out of Stock',0,'Out of Stock'
            ,1,DECODE(SIGN(qtyonhand-min)
                   ,-1,'Under Stocked',0,'Stocked'
                   ,1,DECODE(sign(qtyonhand-max)
                        ,-1,'Stocked',0,'Stocked'
                          ,1,'Over Stocked'
                   )
            )
      ) FROM stock;  
    

Don’ts
SELECT stock_id||' Out of Stock' FROM stock WHERE
   qtyonhand <=0
UNION
SELECT stock_id||' Under Stocked' FROM stock
      WHERE qtyonhand BETWEEN 1 AND min-1
UNION
SELECT stock_id||' Stocked' FROM stock
WHERE qtyonhand BETWEEN min AND max
UNION
SELECT stock_id||' Over Stocked' FROM stock
      WHERE qtyonhand > max;


    

Datatype Conversions
Try to avoid using any type of data conversion function in any part of an SQL statement that could potentially match an index, especially if you are trying to assist performance by matching appropriate indexes.

Do’s
SELECT ...
FROM customer
WHERE cust_id = TO_NUMBER('11');

SELECT ...
FROM customer
WHERE cust_type = 1;

Don’ts
SELECT ...
FROM customer
WHERE cust_id = '11';

SELECT ...
FROM emp
WHERE TO_NUMBER (cust_type) = 1;







Wednesday, 27 April 2016

Tuning SQL Statement #2


Tuning SQL Statement #1.......
Tuning SQL Statement #3.......


•    The UNION statement performs the equivalent of a SELECT DISTINCT on the final result set. In other words, the UNION clause forces all rows returned by each portion of the UNION to be sorted, merged, and de-duplicated before the first row is returned. A UNION ALL simply returns all rows, including duplicates, and does not have to perform any sort, merge, or filter. If your tables are mutually exclusive (include no duplicate records), or you don't care whether duplicates are returned, UNION ALL is much more efficient because it skips the SELECT DISTINCT work, saving a lot of unnecessary resources.

Do's-
SELECT acct_num, balance_amt FROM debit_transactions  WHERE tran_date = '1-JAN-06' 
UNION ALL
SELECT acct_num, balance_amt  FROM credit_transactions  WHERE tran_date = '1-JAN-06' ;  

Don'ts-
SELECT acct_num, balance_amt  FROM debit_transactions  WHERE tran_date = '1-JAN-06'
UNION
SELECT acct_num, balance_amt  FROM credit_transactions  WHERE tran_date = '1-JAN-06' ;



•    Try to avoid using any type of data conversion function in any part of an SQL statement that could potentially match an index, especially if you are trying to assist performance by matching appropriate indexes.

Do's-
SELECT ...  FROM customer  

WHERE cust_id = TO_NUMBER('11');  

SELECT ...  FROM customer
WHERE cust_type = 1;    


Don'ts-
SELECT ...  FROM customer
WHERE cust_id = '11';

SELECT ...  FROM emp
WHERE TO_NUMBER (cust_type) = 1;



•    When using functions in SQL statements it is best to keep the functions away from any columns involving index matching unless function-based indexes are available and can be created.

Do's-
SELECT account_name, trans_date, amount  FROM transaction
WHERE account_name LIKE 'CAPITAL%';  

Don'ts-
SELECT account_name, trans_date, amount  FROM transaction
WHERE SUBSTR(account_name,1,7) = 'CAPITAL';



•    If your SELECT statement contains a HAVING clause, write the query so that the WHERE clause does most of the work (removing undesired rows) instead of making the HAVING clause do the work of removing undesired rows. Using the WHERE clause appropriately eliminates unnecessary rows before they reach the GROUP BY and HAVING clauses, saving unnecessary work and boosting performance.

Do's-
SELECT country, AVG (population)  FROM survey
WHERE country != 'INDIA'
AND country != 'CHINA'
GROUP BY country;

Don'ts-
SELECT country, AVG (population)  FROM survey
GROUP BY country
HAVING country != 'INDIA'
AND country != 'CHINA';



•    The ORDER BY clause is always executed after the WHERE clause. Try not to override the WHERE clause with the ORDER BY clause, because the optimizer may choose a less efficient execution method based on the ORDER BY clause. If possible, put your WHERE-clause columns in the ORDER BY clause.

Do's-
SELECT * FROM ordersline
WHERE order_id < 10
ORDER BY order_id, seq#;  

Don'ts-
SELECT * FROM ordersline
WHERE order_id < 10
ORDER BY seq#;








Tuesday, 26 April 2016

Tuning SQL Statement #1



•    Try to use explicit columns and try to read columns in index orders if possible, even to the point of reading indexes and not tables.
•    Use a standard approach to table aliases.  If two identical SQL statements vary because an identical table has two different aliases, then the SQL is different and will not be shared.
•    Use table aliases and prefix all column names by their aliases when more than one table is involved in a query.  This reduces parse time AND prevents future syntax errors if someone adds a column to one of the tables with the same name as a column in another table.
•    Use bind variables.  The values of bind variables do not need to be the same for two statements to be considered identical.  Bind variables are not substituted until a statement has been successfully parsed.

e.g. -
Do's     - Select FirstNme, MidName, LastName from atul.employee;
Don'ts  - Select * from atul.employee;
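The bind-variable point deserves its own pair; the EmpId column and the Oracle-style :empid placeholder are hypothetical:

Do's     - Select FirstNme from atul.employee where EmpId = :empid;
Don'ts  - Select FirstNme from atul.employee where EmpId = 1001;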



•    Try to match comparison condition column sequences with existing index column sequences, although it is not strictly necessary.
•    Always try to use unique, single-column indexes wherever possible. A single-column unique index is much more likely to produce exact hits. An exact hit is the fastest access method.
•    Try to avoid WHERE clauses that are negative. Negative keywords in the WHERE clause include "IS NULL", "<>", "!=", "!>", "!<", "NOT", "NOT EXISTS", "NOT IN", and "NOT LIKE".

e.g. -
Do's     - Select FirstNme from atul.employee where DeptCd Like 'IN%';
Don'ts  - Select FirstNme from atul.employee where SubSTR(DeptCd, 1,2) = 'IN';

Do's     - Select Material from atul.store where quantity > 0;
Don'ts  - Select Material from atul.store where quantity <> 0;

Do's     - Select Material from atul.store where quantity > 3;
Don'ts  - Select Material from atul.store where quantity + 3 > 6;



Tuning SELECT Statement continues...............





Tuesday, 1 March 2016

ETL Development Standards



These development standards provide consistency in the artifacts the ETL developers create.  This consistency improves testing, operational support, maintenance, and performance.  

Code Development Guidelines


1 Commenting in Code and Objects

As a primary guideline, wherever it is possible and does not interfere with the operation of the applications, all code must contain developer comments/notes.
All ETL jobs must have a proper annotation (short description of the functionality of the job).
The target output files (.csv files) should not contain any leading or trailing spaces.
While deciding the record-level delimiter, the "delimiter collision" issue needs to be considered: no delimiter should be chosen as a field default if it appears as part of the data.

2 ETL Naming Standards

Standardized naming conventions ease the burden on developers switching from one project to another. Knowing what things are named and where they are located is very useful before the design and development phases begin.

The following table identifies DataStage elements and their standard naming convention.


2.1 Job and Properties Naming Conventions

GUI Component   Entity         Convention
Designer        Parallel Job   <<Application>>_<<job_Name>>_JOB
Designer        Sequence       <<Application>>_<<job_Name>>_SEQ
Designer        Server Job     <<Application>>_<<job_Name>>_SVR
Designer        Parameter      <<Application>>_<<job_Name>>_PARM
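For example, a hypothetical parallel job that loads customers for a FIN application would be named FIN_CustomerLoad_JOB, and the sequence that controls it FIN_CustomerLoad_SEQ.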

2.2 Job Processing Stage Naming Conventions

GUI Component   Entity           Convention
Designer        Aggregator       AGG_<<PrimaryFunction>>
Designer        Copy             CP_<<PrimaryFunction>>
Designer        Filter           FLT_<<PrimaryFunction>>
Designer        Funnel           FNL_<<PrimaryFunction>>
Designer        Join (Inner)     JN_<<PrimaryFunction>>
Designer        FTP Enterprise   FTP_<<PrimaryFunction>>
Designer        Lookup           LKP_<<Value Name or Table Name>>
Designer        Merge            MRG_<<PrimaryFunction>>
Designer        Modify           MOD_<<PrimaryFunction>>
Designer        Sort             SRT_<<PrimaryFunction>>

2.3 Links Naming Conventions

GUI Component   Entity                      Convention
Designer        Reference (Lookup)          Lnk_Ref_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Reject (Lookup, File, DB)   Lnk_Rej_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Input                       Lnk_In_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Output                      Lnk_Out_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Delete                      Lnk_Del_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Insert                      Lnk_Ins_<<Number or additional descriptor, if needed to form a unique object name>>
Designer        Update                      Lnk_Upd_<<Number or additional descriptor, if needed to form a unique object name>>

2.4 Data Store Naming Conventions:

In the case of a data store, the class word refers to the type of data store (e.g. Dataset, Sequential File, Table, View, and so forth).

GUI Component   Entity      Convention
Designer        Database    DB_<<DatabaseName>>
Designer        Table       TBL_<<TableName>>
Designer        View        VIEW_<<ViewName>>
Designer        Dimension   DM_<<TableName>>
Designer        Fact        TRAN_<<TableName>>
Designer        Source      SRC_<<Table or Object Name>>
Designer        Target      TRGT_<<Table or Object Name>>

2.5 File Stage Naming Conventions:

GUI Component   Entity              Convention
Designer        Sequential File     SEQ_
Designer        Complex Flat File   CFF_
Designer        Parallel Dataset    DS_








Monday, 1 February 2016

5 Tips For Better DataStage Design #8



#1. Templates should be created to enhance reusability and enforce coding standards; jobs should be created from these templates.
#2. The template should contain the standard job flow along with proper naming conventions for components, proper job-level annotation, and a short/long description. A change-record section should be kept in the log description to track changes.



#3. Don't copy the job design only; copy using the 'save as' or 'create copy' option at the job level.
#4. Log off the DataStage connection after completing your work, to avoid locked jobs.
#5. Create common lookup jobs.
#6. Extraction jobs can be created to build reference datasets, which can then be used in different conversion modules.






Sunday, 31 January 2016

Create a Daemon to Trace New Processes


Description

The following code can be used to create a daemon that watches for processes that show up in the "ps -ef" output with certain characteristics. When it identifies such processes, it attaches to them with a trace utility (e.g. strace, truss, or tusc - you must change the code to reflect whichever is available on your platform). The tool does not follow these processes with a fork-following flag, since it will trace any children that match the same "ps -ef" characteristics anyway. This makes it useful for tracing DS PX programs that contain rsh, since truss's fork flag (i.e. "-f") blocks the rsh from executing.



Usage

The script below should be saved to a file such as /tmp/tracer.sh and given rwx permissions. The trace utility name appropriate for your platform should be altered in the "ps -ef" command and in the "for" loop. The script is then run using this syntax:
    /tmp/tracer.sh <search string>
As mentioned above, the search string can be any value that would appear in the "ps -ef" output, such as a user id, a particular time, a command, or arguments to a command. The fifth and eighth lines of the script gather the list of commands to be traced and then attempt to remove commands that should be ignored. If you find too many processes getting traced, identify why each was selected and alter these two lines by adding a "grep -v" for the items being ignored.
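A minimal sketch of such a tracer daemon, using strace (swap in truss or tusc for your platform). The poll interval and output paths are assumptions, and the line-number references above apply to the original script, not to this sketch:

#!/bin/sh
# tracer.sh -- attach a tracer to every "ps -ef" entry matching $1.
# Sketch only; change strace to truss/tusc as your platform requires.
PATTERN="$1"
[ -n "$PATTERN" ] || { echo "Usage: $0 <search string>"; exit 1; }
SEEN=/tmp/tracer.pids
: > "$SEEN"
while true ; do
  # candidate pids: match the pattern, ignoring this script and the tracer
  PIDS=`ps -ef | grep "$PATTERN" | grep -v grep | grep -v tracer.sh | grep -v strace | awk '{print $2}'`
  for PID in $PIDS ; do
    grep -q "^$PID\$" "$SEEN" && continue      # already attached
    echo "$PID" >> "$SEEN"
    # no fork-follow flag: children show up in ps and are matched on their own
    strace -p "$PID" -o "/tmp/trace.$PID" &
  done
  sleep 1
done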










Sunday, 17 January 2016

Count of Jobs - A Quick DataStage Recipe



What to Cook:
How to count number of jobs in DS project

Ingredients:
Use the dsjob "-ljobs" command



How to Cook:
Go to DS Administrator "projects" tab
Click on command button
Enter the following command to execute:
SH -c "dsjob -ljobs <Project Name> | wc -l"

<Project Name> - Enter your project name
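The same count can be taken from a shell on the engine tier (the project name is a placeholder):

$ . $DSHOME/dsenv
$ $DSHOME/bin/dsjob -ljobs MYPROJECT | wc -l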







Tuesday, 15 December 2015

How to use Universe Shell (uvsh) in DataStage?


In DataStage administration, we have to use the DataStage command line (UniVerse shell) to get information directly from the DataStage UniVerse repository database.

While accessing it from the command line, what a novice admin often does is -

$ uvsh
This directory is not set up for DataStage.
Would you like to set it up(Y/N)?   
Confused? What to do?

Always answer that question "no" - it means you're in the wrong place.
Always launch "uvsh" or "dssh" from one of two places - $DSHOME or inside a project directory. For the latter you're good to go; for the former you'll need to LOGTO your project before you issue any SQL.



How to use UVSH?

## Enter $DSHOME
$ cd $DSHOME

## Source the dsenv file
$ . ./dsenv

## Run uvsh
$ $DSHOME/bin/uvsh

## At the uvsh prompt, log to your project
>LOGTO <project_name>

Many DataStage admins prefer to execute commands from the DataStage Administrator client, or to use dssh instead of uvsh.

How to use DSSH?

## Source the dsenv file
$ . $DSHOME/dsenv

## Run dssh from $DSHOME or a project directory
$ cd $DSHOME
$ $DSHOME/bin/dssh

## At the dssh prompt, log to your project
>LOGTO <project_name>
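Once at the uvsh/dssh prompt, you can query the project's repository tables; for example, DS_JOBS lists the jobs in the project:

>SELECT COUNT(*) FROM DS_JOBS;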



Like the below pages to get updates -
https://www.facebook.com/datastage4you
https://twitter.com/datagenx
https://plus.google.com/+AtulSingh0/posts
https://groups.google.com/forum/#!forum/datagenx