
Sunday, 11 November 2018

WebServices Health Check Report in Python


Continuing from Converting PFX Certs to Certificate and Key Files using OpenSSL, this post covers a requirement to generate a health check report for web services without much human intervention. There are plenty of open-source and proprietary tools that can do this in a few clicks, but I have written something in Python which can do pretty much the same and allows more customization.




WebService_HealthCheck.py:

 
WebService_HealthCheck_QA.config


The config file contains the columns below -


ID|ServiceName|URL|Request
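For example, a single entry could look like this (the ID, service name, URL, and request body here are purely illustrative):

1|OrderStatusService|https://qa-host:8443/orderstatus/check|{"ping": "true"}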


This Python code contains three functions: one GET REST call, one POST REST call, and one file-write operation. We can add more functions which parse the response and take action as defined.
While writing this code, I have assumed that every service is HTTPS and needs a client certificate to make a successful call to the service host server. You can omit this setting if your service is plain HTTP.
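The full script is the WebService_HealthCheck.py file referenced above; as a rough idea of its shape, a minimal sketch using the requests library could look like the following (the function names, certificate paths, and report file name are my own illustration, not necessarily what the attached script uses):

import requests

CONFIG_FILE = "WebService_HealthCheck_QA.config"   # columns: ID|ServiceName|URL|Request
REPORT_FILE = "WebService_HealthCheck_Report.txt"  # illustrative output file name
CLIENT_CERT = ("client.crt", "client.key")         # placeholder cert/key pair; use cert=None for plain HTTP

def get_call(url):
    # GET REST call - return the HTTP status code
    resp = requests.get(url, cert=CLIENT_CERT, timeout=30)
    return resp.status_code

def post_call(url, payload):
    # POST REST call - send the request body taken from the config file
    resp = requests.post(url, data=payload, cert=CLIENT_CERT, timeout=30)
    return resp.status_code

def write_report(lines):
    # FILE WRITE operation - one line per service in the report
    with open(REPORT_FILE, "w") as out:
        out.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    report = []
    with open(CONFIG_FILE) as cfg:
        for line in cfg:
            if not line.strip():
                continue
            sid, name, url, request = line.strip().split("|")
            status = post_call(url, request) if request else get_call(url)
            report.append("|".join([sid, name, url, str(status)]))
    write_report(report)

Each config entry is hit with a POST when a request body is present and with a GET otherwise, and the resulting status codes are written to a pipe-delimited report file.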

As I said, this baseline code is just a skeleton for your service health check. Add more and more functions to automate your boring stuff :-)







Sunday, 14 January 2018

Mongo DB - Installation and Configuration


MongoDB is an open-source document database and the leading NoSQL database, written in C++.
  
MongoDB features:
    Document-Oriented Storage
    Full Index Support
    Replication & High Availability
    Auto-Sharding
    Querying
    Fast In-Place Updates
    Map/Reduce
    GridFS


Reduce cost, accelerate time to market, and mitigate risk with proactive support and enterprise-grade capabilities.


Today, we will see how to install and run MongoDB.

MongoDB Installation on Linux


1. Download the stable version of MongoDB. It will be a tar file.
2. Extract the tar file to a directory of your choice.
 
$ tar -xvf mongodb.tar -C /learn/mongodb


3. Change the ownership of the folder to the user who will run the database. In my case, the user is hduser and the group is hadoop.
$ chown -R hduser:hadoop /learn/mongodb

4. Add the environment variables to .bashrc:
export MONGO_HOME=/learn/mongodb
export PATH=$PATH:$MONGO_HOME/bin
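After saving .bashrc, reload it so the updated PATH takes effect in the current shell:

$ source ~/.bashrc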







5. Create the default DB directory for Mongo
$ mkdir -p /data/db
$ chown -R hduser:hadoop /data/db

This is the default path; you can specify your own db path when starting MongoDB.






$ mongod --dbpath /app/mongodata
This command will start MongoDB; in another terminal you can start working on the db. The "--dbpath /app/mongodata" option is entirely optional.

If you just run $ mongod, it will start and use the default directory which we defined in step 5.


Please don't close this terminal, as doing so can kill the mongod process.







6. Start working on MongoDB
$ mongo
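If you prefer to verify the installation from Python rather than from the mongo shell, a quick connectivity check with the pymongo driver could look like this (assuming pymongo is installed; the database and collection names are just examples):

from pymongo import MongoClient

client = MongoClient("localhost", 27017)              # default mongod host and port
db = client.testdb                                    # example database name
result = db.healthcheck.insert_one({"status": "ok"})  # write a test document
print(db.healthcheck.find_one({"_id": result.inserted_id}))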











Wednesday, 15 March 2017

JDBC DSN Configuration in IIB (WMB)


1. List the brokers (integration nodes) on the system and confirm the target broker is running:

mqsilist


2. List all reportable entity names under JDBCProviders to see which JDBC provider definitions already exist:
mqsireportproperties <BROKER> -c JDBCProviders -a -o AllReportableEntityNames

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -a -o AllReportableEntityNames




3. Report the properties of the supplied Microsoft_SQL_Server JDBC provider template to see which properties can be set:
mqsireportproperties <BROKER> -c JDBCProviders -o Microsoft_SQL_Server -r

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -o Microsoft_SQL_Server -r



4. Create a new JDBCProviders configurable service (the JDBC DSN) with the SQL Server connection URL format:
mqsicreateconfigurableservice <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n connectionUrlFormat -v "jdbc:sqlserver://<SERVER>:<PORT>;user=<USER>;password=<PASSWORD>;databaseName=<DB_NAME>"

mqsicreateconfigurableservice TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n connectionUrlFormat -v "jdbc:sqlserver://IRIS-CSG-338:1433;user=sa;password=password@1;databaseName=sample"



5. Set the database name, schema, port number, security identity, and server name on the new DSN:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n databaseName,databaseSchemaNames,portNumber,securityIdentity,serverName -v <DB_NAME>,dbo,1433,<USER>@<SERVER>,<SERVER>

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n databaseName,databaseSchemaNames,portNumber,securityIdentity,serverName -v sample,dbo,1433,sa@IRIS-CSG-338,IRIS-CSG-338



6. Point the jarsURL property at the directory containing the SQL Server JDBC driver jars:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n jarsURL -v "<DRIVER_JAR_PATH>"

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n jarsURL -v "C:\Program Files\dbDrivers\sqljdbc_4.2\enu\jre7"



7. Set the type-4 datasource and driver class names for the Microsoft JDBC driver:
mqsichangeproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -n type4DatasourceClassName,type4DriverClassName -v "com.microsoft.sqlserver.jdbc.SQLServerXADataSource","com.microsoft.sqlserver.jdbc.SQLServerDriver"

mqsichangeproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -n type4DatasourceClassName,type4DriverClassName -v "com.microsoft.sqlserver.jdbc.SQLServerXADataSource","com.microsoft.sqlserver.jdbc.SQLServerDriver"



8. Register the database user ID and password for the DSN with mqsisetdbparms:
mqsisetdbparms <BROKER> -n jdbc::<JDBC_DSN_NAME> -u <USER> -p <PASSWORD>

mqsisetdbparms TESTNODE_atul.singh -n jdbc::SQLSrvrJdbc -u sa -p password@1



9. Report the properties of the new DSN to verify the configuration:
mqsireportproperties <BROKER> -c JDBCProviders -o <JDBC_DSN_NAME> -r

mqsireportproperties TESTNODE_atul.singh -c JDBCProviders -o SQLSrvrJdbc -r



10. Restart the broker so the changes take effect:
mqsistop <BROKER>
mqsistart <BROKER>

mqsistop TESTNODE_atul.singh
mqsistart TESTNODE_atul.singh









Tuesday, 11 October 2016

5 Tips For Better DataStage Design #16


1. Use a 4-node configuration file for unit testing/system testing the job.
2. If there are multiple jobs to be run for the same module, archive the source files in the after-job routine of the last job.
3. Check whether the file exists in the landing directory before moving the sequential file; the ‘mv’ command will end up moving the landing directory itself if the file is not found.

4. Ensure that the UNIX files created by any DataStage job are created by the same UNIX user who ran the job.
5. Make sure that the Short Job Description is filled in using ‘Description Annotation’ and that it contains the job name as part of the description. Don’t use a plain Annotation for the job description.






Wednesday, 23 March 2016

5 Tips For Better DataStage Design #11




  • When writing intermediate results that will only be shared between parallel jobs, always write to persistent data sets (using Data Set stages). You should ensure that the data is partitioned, and that the partitions, and sort order, are retained at every stage. Avoid format conversion or serial I/O.
  • Data Set stages should be used to create restart points in the event that a job or sequence needs to be rerun. But, because data sets are platform and configuration specific, they should not be used for long-term backup and recovery of source data.
  • Depending on available system resources, it might be possible to optimize overall processing time at run time by allowing smaller jobs to run concurrently. However, care must be taken to plan for scenarios when source files arrive later than expected, or need to be reprocessed in the event of a failure.
  • Parallel configuration files allow the degree of parallelism and resources used by parallel jobs to be set dynamically at run time. Multiple configuration files should be used to optimize overall throughput and to match job characteristics to available hardware resources in development, test, and production modes.
  • The proper configuration of scratch and resource disks and the underlying file system and physical hardware architecture can significantly affect overall job performance.







Thursday, 27 August 2015

VI Editor in Linux - 1





Introduction to vi Editor 


  • Full-screen editor
  • Two modes of operation: command and text
  • Utilizes one letter commands
  • Does not format text
  • Flexible search and replace facility with pattern matching
  • Allows for user-defined editing functions using macros

Modes of Operation



The vi editor has two modes of operation. Command mode allows the user to perform
operations on the text, such as cut, paste, cursor movement, and replacement. These
operations are all invoked by one character “commands.” Text mode (also known as
insert mode) allows the user to enter text; all characters typed are inserted into the file.

There is no way to tell which mode the editor is in, other than pressing a key on the
keyboard. If the key you press is inserted into the file, the current mode is text mode. If
the key performs an action, the current mode is command mode.

Switching from command mode to text mode is done with one of the insert text
commands, which will be covered later. Returning to command mode is done by
pressing the <Esc> key.


Starting vi 


$ vi filename
  • If the file "filename" does not exist, it will be created
  • Otherwise, vi will open the existing file
When a session is initiated, one of two things happens:
  • If the file to be edited exists, a copy of the file is put into a temporary buffer by default.
  • If the file does not exist, an empty buffer is opened for this session.
The tilde characters represent empty lines in the editor.  The editor starts in command mode.




