Wednesday, 15 March 2017

JDBC DSN Configuration in IIB (WMB)


1. List the integration nodes (brokers) defined on the system:

mqsilist


2. List all reportable JDBCProviders entity names, to see which providers already exist:
mqsireportproperties <BROKER> -c JDBCProviders -a -o AllReportableEntityNames

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -a -o AllReportableEntityNames




3. Report the properties of the supplied Microsoft_SQL_Server JDBC provider definition:
mqsireportproperties <BROKER> -c JDBCProviders -o Microsoft_SQL_Server -r

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -o Microsoft_SQL_Server -r



4. Create a JDBCProviders configurable service for the new DSN and set its connection URL format:
mqsicreateconfigurableservice <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n connectionUrlFormat -v "jdbc:sqlserver://<SERVER>:<PORT>;user=<USER>;password=<PASSWORD>;databaseName=<DB_NAME>"

mqsicreateconfigurableservice TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n connectionUrlFormat -v "jdbc:sqlserver://IRIS-CSG-338:1433;user=sa;password=password@1;databaseName=sample"



5. Set the database name, schema, port number, security identity, and server name on the new DSN:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n databaseName,databaseSchemaNames,portNumber,securityIdentity,serverName -v <DB_NAME>,dbo,1433,<USER>@<SERVER>,<SERVER>

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n databaseName,databaseSchemaNames,portNumber,securityIdentity,serverName -v sample,dbo,1433,sa@IRIS-CSG-338,IRIS-CSG-338



6. Point jarsURL at the directory that contains the JDBC driver JAR files:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n jarsURL -v "<DRIVER_JAR_PATH>"

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n jarsURL -v "C:\Program Files\dbDrivers\sqljdbc_4.2\enu\jre7"



7. Set the type-4 datasource and driver class names for the SQL Server JDBC driver:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n type4DatasourceClassName,type4DriverClassName -v "com.microsoft.sqlserver.jdbc.SQLServerXADataSource","com.microsoft.sqlserver.jdbc.SQLServerDriver"

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n type4DatasourceClassName,type4DriverClassName -v "com.microsoft.sqlserver.jdbc.SQLServerXADataSource","com.microsoft.sqlserver.jdbc.SQLServerDriver"



8. Store the database user ID and password for the JDBC connection:
mqsisetdbparms <BROKER> -n jdbc::<JDBC_DSN_NAME> -u <USER> -p <PASSWORD>

mqsisetdbparms TESTNODE_atul.singh -n jdbc::SQLSrvrJdbc -u sa -p password@1



9. Verify the configuration of the new DSN:
mqsireportproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -r

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -r



10. Restart the broker so that the changes take effect:
mqsistop <BROKER>
mqsistart <BROKER>

mqsistop TESTNODE_atul.singh
mqsistart TESTNODE_atul.singh









Thursday, 9 March 2017

Perl Script to get content difference of a file


This Perl script helps you get the content difference of a file that exists in two different directories.

Usage: diffmany.pl filename dir1 dir2

Working: the script picks the filename from the arguments, compares its content in the two directories, and prints the differences. The assumption is that the same file exists in both directories, but the script can be altered to take two file names and one directory, or whatever suits your need.
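The script itself is linked below. As a rough illustration only (the published diffmany.pl may differ in its details), the core logic could be sketched like this:

#!/usr/bin/perl
# Sketch only: compare the content of the same file present in two directories.
# Usage: diffmany.pl filename dir1 dir2
use strict;
use warnings;

die "Usage: $0 filename dir1 dir2\n" unless @ARGV == 3;
my ($file, $dir1, $dir2) = @ARGV;

open my $fh1, '<', "$dir1/$file" or die "Cannot open $dir1/$file: $!\n";
open my $fh2, '<', "$dir2/$file" or die "Cannot open $dir2/$file: $!\n";

my $lineno = 0;
while (1) {
    my $l1 = <$fh1>;
    my $l2 = <$fh2>;
    last if !defined $l1 && !defined $l2;   # both files fully read
    $lineno++;
    next if defined $l1 && defined $l2 && $l1 eq $l2;
    print "Line $lineno differs:\n";
    print "  $dir1: ", defined $l1 ? $l1 : "<no line>\n";
    print "  $dir2: ", defined $l2 ? $l2 : "<no line>\n";
}
close $fh1;
close $fh2;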



View or Download : diffmany.pl





Sunday, 5 March 2017

How to get DataStage Project Settings


DSListProjectSettings.bat is used to report all available settings of existing projects.

USAGE: DSListProjectSettings <SERVER> <USER> <PASSWORD> <PROJECT>

The DSListProjectSettings batch file reports all available project settings and environment variable names and values to standard output. The first section shows available DataStage project properties and their settings. The second section shows environment variable names and values. When run successfully, each section will report a status code of zero.
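For example, with a hypothetical server, credentials, and project (all values below are placeholders):

REM All values are placeholders - substitute your own server, user, password, and project
DSListProjectSettings MYSERVER dsadm mypassword MyProject > project_settings.txt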






Tuesday, 28 February 2017

ETL Performance Tuning: Identification of Performance Bottlenecks


The goal of performance tuning is to optimize session performance by eliminating performance bottlenecks. To tune the performance of a session, we need to identify a performance bottleneck, eliminate it, and then identify the next one, until we are satisfied with the session performance. The Test Load option can be used to run sessions while tuning session performance.
The most common performance bottleneck occurs when the ETL server writes to a target database. We can identify performance bottlenecks by the following methods:



Running test sessions. We can configure a test session to read from a flat file source, or to write to a flat file target (or any stage that can hold the data without writing it onward), to identify source and target bottlenecks.

Studying performance details. We can create a set of information called performance details to identify session bottlenecks. Performance details provide information such as buffer input and output efficiency. The Collect Performance Data option in the session properties enables the session to generate a count of input and output rows through each transformation.

Monitoring system performance. System monitoring tools can be used to view percent CPU usage, I/O waits, and paging to identify system bottlenecks.


Once the location of a performance bottleneck is determined, we can eliminate the bottleneck by following these guidelines:

Eliminate source and target database bottlenecks. 
Optimize the query, increase the database network packet size, or configure index and key constraints.

Eliminate mapping bottlenecks. 
Fine-tune the pipeline logic and the transformation settings and options in mappings to eliminate mapping bottlenecks.

Eliminate session bottlenecks. 
The session strategy can be optimized, and performance details can be used to help tune the session configuration.

Eliminate system bottlenecks. 
Have the system administrator analyze information from system monitoring tools and improve CPU and network performance.


If all of the bottlenecks above have been tuned, session performance can be optimized further by increasing the number of pipeline partitions in the session. Adding partitions can improve performance by utilizing more of the system hardware while processing the session.





Thursday, 23 February 2017

Shell script to access DataStage Director ETL joblogs



We have various data warehouses hosted on AIX/Linux/UNIX operating systems. DataStage Director is one of the ETL monitoring and scheduling tools used in numerous data warehouses. In case of ETL job failures, we need to log in to DataStage Director and check the log for error messages. ETL job logs can span several pages with various types of informative messages, and usually we need to locate the error message under the 'Fatal' message type. Doing this by searching the DataStage log manually can be very time-consuming and tiring.




This shell script lets the user access the error messages directly from the Linux screen. The script can also email the filtered log messages to the user's mailbox. It performs the following steps (a minimal sketch follows the list):


- Accept the job name and other parameters at script execution.
- Establish the proper environment settings for local DataStage use.
- Locate the event IDs for fatal errors in the ETL job log.
- Extract the detailed log entries for those fatal errors.
- Mail the filtered job log, with the exact fatal error message, to the user.
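
The published script is not reproduced here, but a rough sketch of the core idea using the standard dsjob command-line interface could look like the following. The install path, field parsing, and mail command are assumptions and may need adjusting for your environment.

#!/bin/sh
# Sketch only: pull the FATAL entries from a DataStage job log and mail them.
# Usage (names are illustrative): getjoblog.sh <project> <jobname> <mail_to>

PROJECT=$1
JOB=$2
MAILTO=$3

# Environment settings for local DataStage use (install path is an assumption).
DSENGINE=/opt/IBM/InformationServer/Server/DSEngine
. $DSENGINE/dsenv
DSJOB=$DSENGINE/bin/dsjob

LOGFILE=/tmp/${JOB}_fatal.log
> "$LOGFILE"

# Locate the event ids of FATAL entries, then extract the detail log for each.
# The awk filter keeps only numeric event ids; adjust it if your -logsum output differs.
$DSJOB -logsum -type FATAL "$PROJECT" "$JOB" | awk '$1 ~ /^[0-9]+$/ {print $1}' |
while read EVENTID
do
    $DSJOB -logdetail "$PROJECT" "$JOB" "$EVENTID" >> "$LOGFILE"
done

# Mail the filtered job log to the user.
mailx -s "Fatal errors for job $JOB" "$MAILTO" < "$LOGFILE"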





Tuesday, 21 February 2017

PDF Split & Merge with Perl


Sharing two Perl utility scripts, which can be used to extract just a few pages from a PDF file and to combine different PDF files into a single PDF. There are many situations where we need only a few pages from a big PDF file; instead of storing or carrying the whole file, we can extract just the page or pages we require. We can also combine different PDF files into a single PDF using the second script. For example, if we have extracted pages 10-15 and pages 100-120 from a big PDF file using pdf_extract.pl, we can combine those two PDFs (the one containing pages 10-15 and the one containing pages 100-120) into a single PDF file using pdf_merge.pl.


NOTE: These two Perl scripts use a Perl module called PDF::API2. If it is not present on your system as part of the Perl installation, you can download the module from www.cpan.org and install it. Please see the installation section for more details.


These two scripts can be used on Windows, UNIX, or Linux. They are currently tested on Windows with ActivePerl 5.8.8, but they should work on UNIX and Linux as well. For pdf_extract.pl to work on UNIX and Linux, change the variable called "path_separator" to "/" instead of "\\"; this variable can be seen at the start of the script. pdf_merge.pl can be used on both Windows and UNIX/Linux without any modification.


Usage:

1) pdf_extract.pl

     perl pdf_extract.pl -i <input pdf file> -p <page or page range which needs to be extracted>

        where
        -i : Please give the full path to the input PDF file
        -p : Page Number or Page range which needs to be extracted from the input PDF

        example : To extract pages 3 to 5, execute

           perl pdf_extract.pl -i /tmp/abc.pdf -p 3-5

        example : To extract only page 3, execute

        perl pdf_extract.pl -i /tmp/abc.pdf -p 3


Executing with the -h option displays the usage on the screen.

Example : perl ./pdf_extract.pl -h


2) pdf_merge.pl


perl pdf_merge.pl <output pdf file with full path> <input pdf file 1> <input pdf file 2> etc

Execute the script with all the PDF files that need to be merged.
The script will merge them in the same order in which they are given in the input.

i.e. if you execute pdf_merge.pl /tmp/out.pdf /tmp/abc.pdf /tmp/xyz.pdf

then the pages from xyz.pdf will come after the pages from abc.pdf.

Executing with the -h option displays the usage on the screen.

Example : ./pdf_merge.pl -h


CodeBase:

README File:
pdf_extract.pl
pdf_merge.pl
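
If the links above are not available, the core of the page-extraction logic can be sketched with PDF::API2 roughly as follows. This is an illustration only, not the published pdf_extract.pl, and the argument handling is simplified:

#!/usr/bin/perl
# Sketch only: copy a page range from one PDF into a new PDF with PDF::API2.
# Example: perl extract_sketch.pl /tmp/abc.pdf 3 5 /tmp/out.pdf
use strict;
use warnings;
use PDF::API2;

die "Usage: $0 input.pdf first_page last_page output.pdf\n" unless @ARGV == 4;
my ($infile, $first, $last, $outfile) = @ARGV;

my $src = PDF::API2->open($infile);
my $out = PDF::API2->new();

for my $p ($first .. $last) {
    # Copy page $p of the source; a target page number of 0 appends it at the end.
    $out->import_page($src, $p, 0);
}

$out->saveas($outfile);

pdf_merge.pl works along the same lines: open each input PDF in turn, loop over its pages (PDF::API2 can report the page count, via pages() or page_count() depending on the module version), import each page into a single output document, and save it.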








Thursday, 9 February 2017

get Queue Depths