
Friday, 7 July 2017

ICONV mystery - the UV function


Iconv (Internal CONVersion) is a function supported by the UniVerse database (UV db) that converts data, not only dates, into an internal format. DataStage Server jobs use many UV functions to manipulate data.

Today, I will try to unwrap the mystery behind the Iconv function and put the details in simpler words. We will not go into general data conversion, only the date conversion used by DataStage :-)

Like most other date functions (the Parallel ones, for example), Iconv accepts a date string along with its format.

Suppose, Date =   June 17, 2017

To convert this date into internal format, we can use:

Iconv("2017-06-17", "D-YMD")   = 18066
Iconv("2017/06/17", "D/YMD")   = 18066
Iconv("2017:17:06", "D:YDM")   = 18066
Iconv("17-06-17", "D-Y2MD")    = 18066



D-  --> D is the date conversion code, followed by the delimiter character (here "-")
Y   --> year in YYYY (Y2 for a two-digit year)
M   --> month in MM
D   --> day in DD

As we can see, when we provide the date format along with the date string, Iconv converts the date to an integer. This is important because DataStage can now understand the given date, and we can then use the Oconv function to re-format the date as required.
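That integer is simply a day count: the Pick/UniVerse internal date is the number of days since 31 December 1967 (day 0). As a sketch in Python (not DataStage BASIC; `iconv_date` is a hypothetical stand-in, not a real API), the examples above can be reproduced like this:

```python
from datetime import date

# Pick/UniVerse internal date epoch: day 0 = 31 December 1967
PICK_EPOCH = date(1967, 12, 31)

def iconv_date(year: int, month: int, day: int) -> int:
    """Emulate Iconv's internal date value: days elapsed since the Pick epoch."""
    return (date(year, month, day) - PICK_EPOCH).days

print(iconv_date(2017, 6, 17))  # 18066, matching the Iconv examples above
```

Because every date becomes a plain integer, comparisons and arithmetic on dates reduce to integer operations, which is exactly why the internal format is useful.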

I will cover Oconv in the next post. Till then, keep learning!!




Like the pages below to get updates:
https://www.facebook.com/datastage4you
https://twitter.com/datagenx
https://plus.google.com/+AtulSingh0/posts
https://datagenx.slack.com/messages/datascience/

Monday, 1 February 2016

5 Tips For Better DataStage Design #8



#1. Templates should be created to enhance reusability and enforce coding standards. Jobs should be created from templates.
#2. The template should contain the standard job flow along with proper naming conventions for components, proper job-level annotation, and short/long descriptions. A change-record section should be kept in the log description to track modifications.



#3. Don't copy just the job design; copy using the 'Save As' or 'Create Copy' option at the job level.
#4. Log off the DataStage connection after completing work to avoid locked jobs.
#5. Create common lookup jobs that can be reused.
#6. Some extraction jobs can be created to build reference datasets. These datasets can then be used in different conversion modules.






Monday, 2 November 2015

5 Tips For Better DataStage Design #4



1) When using the AsInteger() function in a DataStage transformer, always trim the input column before passing it to the function, because extra spaces or unwanted characters generate zeros where actual integer values are expected. Use the APT_STRING_PADCHAR=0x20 (space) environment variable for fixed-field padding.
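As an illustration only (Python, not DataStage BASIC; `as_integer` is a hypothetical stand-in for the transformer's AsInteger), a strict converter that yields 0 on any non-numeric input shows why trimming first matters:

```python
def as_integer(s: str) -> int:
    """Hypothetical stand-in for AsInteger: strict digits-only parse, 0 otherwise."""
    return int(s) if s.lstrip("-").isdigit() else 0

raw = "  42 "                   # padded input, e.g. from a fixed-width field
print(as_integer(raw))          # 0  -> the surrounding spaces make the parse fail
print(as_integer(raw.strip()))  # 42 -> trimming first gives the expected value
```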

2) Len(col) will return an incorrect length if the input column contains non-ASCII or double-byte characters. Check the job's NLS settings to fix this.
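A quick Python analogue of this character-versus-byte mismatch: counting encoded bytes instead of characters inflates the length as soon as non-ASCII characters appear, which is the same class of problem the NLS settings address.

```python
s = "café"                     # one non-ASCII character
print(len(s))                  # 4 characters
print(len(s.encode("utf-8")))  # 5 bytes: 'é' occupies two bytes in UTF-8
```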



3) To remove embedded spaces from decimal data, use the StripWhiteSpace(input.field) function, which removes all spaces.
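A Python analogue of this (hypothetical helper, not the DataStage function itself): stripping every whitespace character, embedded or not, from a decimal string.

```python
import re

def strip_whitespace(s: str) -> str:
    """Remove all whitespace characters, including embedded ones."""
    return re.sub(r"\s+", "", s)

print(strip_whitespace(" 1 234.5 "))  # "1234.5"
```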

4) To get the DataStage job number, open the job's log view in DataStage Director and double-click any log entry. The job number is listed under the field "Job Number:".

5) Set the two parameters APT_NO_PART_INSERTION and APT_NO_SORT_INSERTION to TRUE to prevent DataStage from inserting partitioning or sorting operators at compile time, which can improve job performance. This also removes the warning: "When checking operator: User inserted sort "<name>" does not fulfill the sort requirements of the downstream operator "<name>"".




