Showing posts with label Datawarehouse. Show all posts

Monday, 6 February 2017

Implementing Slowly Changing Dimension in ETL


A dimension, in data management and data warehousing, is a logical grouping of data such as geographical location, customer information, or product information. Slowly Changing Dimensions (SCDs) are dimensions whose data changes slowly.

For example, you may have a dimension in your database that tracks the sales records of your salespersons across different pharmacies. Creating sales reports seems simple enough, until a salesperson is transferred from one pharmacy to another. How do you record such a change in your sales dimension?

When we need to track change, it is unacceptable to put everything into the fact table or make every dimension time-dependent to deal with these changes. We would quickly talk ourselves back into a full-blown normalized structure, with the consequent loss of understandability and query performance. Instead, we take advantage of the fact that most dimensions are nearly constant over time. We can preserve the independent dimensional structure with only relatively minor adjustments to contend with the changes. We refer to these nearly constant dimensions as slowly changing dimensions. Since Ralph Kimball first introduced the notion of slowly changing dimensions in 1994, some IT professionals, in a never-ending quest to speak in acronyms, have termed them SCDs.

For each attribute in our dimension tables, we must specify a strategy to handle change. In other words, when an attribute value changes in the operational world, how will we respond to the change in our dimensional models? In the following sections we'll describe three basic techniques for dealing with attribute changes, along with a couple of hybrid approaches. You may decide that you need to employ a combination of these techniques within a single dimension table.

Type 1: Overwrite the Value
With the type 1 response, we merely overwrite the old attribute value in the dimension row, replacing it with the current value. In so doing, the attribute always reflects the most recent assignment.
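As a rough sketch (Python, with a hypothetical in-memory salesperson dimension, not the actual ETL code), a type 1 change is a simple overwrite:

```python
# Type 1 SCD: overwrite the attribute in place; no history is kept.
# The dimension is modeled here as a list of dicts (hypothetical schema).

def scd_type1_update(dimension, natural_key, new_attrs):
    """Overwrite attributes on the row matching the natural key."""
    for row in dimension:
        if row["salesperson_id"] == natural_key:
            row.update(new_attrs)  # the old value is lost
            return row
    return None

dim_salesperson = [
    {"sk": 1, "salesperson_id": "S001", "pharmacy": "Downtown"},
]
scd_type1_update(dim_salesperson, "S001", {"pharmacy": "Uptown"})
```

Note that after the update there is no way to report on the salesperson's Downtown history; that is the trade-off type 1 makes for simplicity.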

Type 2: Add a Dimension Row
A type 2 response is the predominant technique for maintaining historical data in slowly changing dimensions. A type 2 SCD is a dimension where a new row is created when the value of an attribute changes.
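A type 2 change can be sketched as expiring the current row and inserting a new one. The surrogate key, effective-date columns, and current-row flag below are hypothetical, but typical of type 2 designs:

```python
from datetime import date

# Type 2 SCD: expire the current row and append a new versioned row.
# Hypothetical schema: surrogate key "sk", natural key, tracked attribute,
# effective-date pair, and a current-row flag.

def scd_type2_update(dimension, natural_key, new_attrs, change_date):
    """Close out the current row and append a new one for the new value."""
    next_sk = max(r["sk"] for r in dimension) + 1
    for row in dimension:
        if row["salesperson_id"] == natural_key and row["is_current"]:
            row["end_date"] = change_date      # expire the old version
            row["is_current"] = False
            new_row = {**row, **new_attrs, "sk": next_sk,
                       "start_date": change_date, "end_date": None,
                       "is_current": True}
            dimension.append(new_row)
            return new_row
    return None

dim_salesperson = [
    {"sk": 1, "salesperson_id": "S001", "pharmacy": "Downtown",
     "start_date": date(2016, 1, 1), "end_date": None, "is_current": True},
]
scd_type2_update(dim_salesperson, "S001",
                 {"pharmacy": "Uptown"}, date(2017, 2, 6))
```

Facts loaded before the change keep pointing at the old surrogate key, so history is partitioned correctly across the two versions.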

Type 3: Add a Dimension Column
While the type 2 response partitions history, it does not allow us to associate the new attribute value with old fact history, or vice versa. A type 3 SCD is a dimension where an alternate "old value" column is created when an attribute changes.
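A type 3 change can be sketched by shifting the current value into a hypothetical "previous" column before overwriting:

```python
# Type 3 SCD: keep limited history in an extra "previous value" column.
# Hypothetical row layout; only one prior value is retained.

def scd_type3_update(row, attr, new_value):
    """Shift the current value into previous_<attr>, then overwrite."""
    row["previous_" + attr] = row[attr]
    row[attr] = new_value
    return row

row = {"sk": 1, "salesperson_id": "S001",
       "pharmacy": "Downtown", "previous_pharmacy": None}
scd_type3_update(row, "pharmacy", "Uptown")
```

Because both values live on the same row, reports can group old facts by either the current or the prior assignment, at the cost of remembering only one change.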

Type 6: Hybrid Slowly Changing Dimension Techniques
The Type 6 method combines the approaches of types 1, 2 and 3 (1 + 2 + 3 = 6). One possible explanation of the origin of the term is that it was coined by Ralph Kimball during a conversation with Stephen Pace from Kalido, and it has also been described by Tom Haughey. It is not frequently used because it has the potential to complicate end-user access, but it has some advantages over the other approaches, especially when techniques are employed to mitigate the downstream complexity.
The approach is to use a Type 1 slowly changing dimension, but to add a pair of date columns indicating the date range over which a particular row in the dimension applies, and a flag indicating whether the record is the current record.
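Combining the three, a type 6 change might look like the following sketch. The schema is hypothetical: a historical attribute column (type 3-style), a type 1 "current value" column overwritten on every row, plus type 2 row versioning with effective dates and a current flag:

```python
from datetime import date

# Type 6 SCD sketch: type 2 versioned rows, plus a type 1 "current_pharmacy"
# column that is overwritten on ALL rows for the natural key, while each
# row's "pharmacy" column keeps the value that was in effect at the time.

def scd_type6_update(dimension, natural_key, new_value, change_date):
    next_sk = max(r["sk"] for r in dimension) + 1
    new_row = None
    for row in dimension:
        if row["salesperson_id"] != natural_key:
            continue
        row["current_pharmacy"] = new_value     # type 1: overwrite everywhere
        if row["is_current"]:
            row["end_date"] = change_date       # type 2: expire current row
            row["is_current"] = False
            new_row = {**row, "sk": next_sk,
                       "pharmacy": new_value,   # historical value on new row
                       "start_date": change_date, "end_date": None,
                       "is_current": True}
    if new_row:
        dimension.append(new_row)
    return new_row

dim_salesperson = [
    {"sk": 1, "salesperson_id": "S001",
     "pharmacy": "Downtown", "current_pharmacy": "Downtown",
     "start_date": date(2016, 1, 1), "end_date": None, "is_current": True},
]
scd_type6_update(dim_salesperson, "S001", "Uptown", date(2017, 2, 6))
```

Queries can now group facts either "as was" (via `pharmacy`) or "as is" (via `current_pharmacy`), which is the main attraction of the hybrid.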

The aim is to move data from the source application system to the Reporting Warehouse of a worldwide pharmaceutical major for analysis and reporting. The code described here is responsible for the movement of data between the Operational Data Store (ODS) and the data warehouse.

Implementation Flow:
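The original job design is not reproduced here, but the ODS-to-warehouse movement described above might be sketched as follows. The schema and the timestamp-based change detection are assumptions for illustration, not the actual job:

```python
from datetime import date

# Hypothetical end-to-end flow: pull rows changed in the ODS since the
# last load, then apply each change to the warehouse dimension as a
# type 2 update (expire the old row, insert a new versioned row).

def extract_changes(ods_rows, last_load):
    """Pick up ODS rows modified since the last warehouse load."""
    return [r for r in ods_rows if r["updated_at"] > last_load]

def apply_type2(dimension, change):
    next_sk = max((r["sk"] for r in dimension), default=0) + 1
    for row in dimension:
        if row["salesperson_id"] == change["salesperson_id"] and row["is_current"]:
            row["end_date"] = change["updated_at"]
            row["is_current"] = False
            dimension.append({**row, "pharmacy": change["pharmacy"],
                              "sk": next_sk,
                              "start_date": change["updated_at"],
                              "end_date": None, "is_current": True})
            return
    # First sighting of this natural key: a plain insert.
    dimension.append({"sk": next_sk,
                      "salesperson_id": change["salesperson_id"],
                      "pharmacy": change["pharmacy"],
                      "start_date": change["updated_at"],
                      "end_date": None, "is_current": True})

ods = [{"salesperson_id": "S001", "pharmacy": "Uptown",
        "updated_at": date(2017, 2, 6)}]
dim = []
for change in extract_changes(ods, date(2017, 1, 1)):
    apply_type2(dim, change)
```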






Like the below page to get updates:
https://www.facebook.com/datastage4you
https://twitter.com/datagenx
https://plus.google.com/+AtulSingh0/posts
https://datagenx.slack.com/messages/datascience/

Sunday, 27 March 2016

Data Warehouse Glossary #3


Drill Through:
Data analysis that goes from an OLAP cube into the relational database.

Data Warehousing:
The process of designing, building, and maintaining a data warehouse system.

Conformed Dimension:
A dimension that has exactly the same meaning and content when being referred from different fact tables.

Central Warehouse:
A database created from operational extracts that adheres to a single, consistent, enterprise data model to ensure consistency of decision-support data across the corporation. A style of computing where all the information systems are located and managed from a single physical location.

Change Data Capture:
The process of capturing changes made to a production data source. Change data capture is typically performed by reading the source DBMS log. It consolidates units of work, ensures data is synchronized with the original source, and reduces data volume in a data warehousing environment.

Classic Data Warehouse Development:
The process of building an enterprise business model, creating a system data model, defining and designing a data warehouse architecture, constructing the physical database, and lastly populating the warehouse database.

Data Access Tools:
End-user oriented tools that allow users to build SQL queries by pointing and clicking on a list of tables and fields in the data warehouse.

Data Analysis and Presentation Tools:
Software that provides a logical view of data in a warehouse. Some create simple aliases for table and column names; others create data that identifies the contents and location of data in the warehouse.

Data Dictionary:
A database about data and database structures. A catalog of all data elements, containing their names, structures, and information about their usage. A central location for metadata. Normally, data dictionaries are designed to store a limited set of available metadata, concentrating on the information relating to the data elements, databases, files and programs of implemented systems.

Data Warehouse Architecture:
An integrated set of products that enable the extraction and transformation of operational data to be loaded into a database for end-user analysis and reporting.

Data Warehouse Architecture Development:
A SOFTWARE AG service program that provides an architecture for a data warehouse that is aligned with the needs of the business. This program identifies and designs a warehouse implementation increment and ensures the required infrastructure, skill sets, and other data warehouse foundational aspects are in place for a Data Warehouse Incremental Delivery.

Data Warehouse Engines:
Relational databases (RDBMS) and multi-dimensional databases (MDBMS). Data warehouse engines require strong query capabilities, fast load mechanisms, and large storage capacity.

Data Warehouse Incremental Delivery:
A SOFTWARE AG program that delivers one data warehouse increment from design review through implementation.

Data Warehouse Infrastructure:
A combination of technologies and the interaction of technologies that support a data warehousing environment.

Data Warehouse Management Tools:
Software that extracts and transforms data from operational systems and loads it into the data warehouse.

Data Warehouse Network:
An industry organization for know-how exchange. SOFTWARE AG was the first vendor member of the Data Warehouse Network.

Functional Data Warehouse:
A warehouse that draws data from nearby operational systems. Each functional warehouse serves a distinct and separate group (such as a division), functional area (such as manufacturing), geographic unit, or product marketing group.

OLTP:
On-Line Transaction Processing. OLTP describes the requirements for a system that is used in an operational environment.

Scalability:
The ability to scale to support larger or smaller volumes of data and more or fewer users. The ability to increase or decrease size or capability in cost-effective increments with minimal impact on the unit cost of business and the procurement of additional services.

Schema:
The logical and physical definition of data elements, physical characteristics and inter-relationships.

Slice and Dice:
A term used to describe a complex data analysis function provided by MDBMS tools.

Warehouse Business Directory:
Provides business professionals access to the data warehouse by browsing a catalog of its contents.

Warehouse Technical Directory:
Defines and manages an information life cycle: a definition of warehouse construction, change management, impact analysis, distribution and operation of a warehouse.

Transformers:
Rules applied to change data.



Monday, 21 March 2016

Data Warehouse Glossary #2


Dimension:
A variable, perspective or general category of information that is used to organize and analyze information in a multi-dimensional data cube.

Drill Down:
The ability of a data-mining tool to move down into increasing levels of detail in a data mart, data warehouse or multi-dimensional data cube.

Drill Up:
The ability of a data-mining tool to move back up into higher levels of data in a data mart, data warehouse or multi-dimensional data cube.

Executive Information Management System (EIS):
A type of decision support system designed for executive management that reports summary level information as opposed to greater detail derived in a decision support system.

Extraction, Transformation and Loading (ETL) Tool:
Software that is used to extract data from a data source like an operational system or data warehouse, modify the data and then load it into a data mart, data warehouse or multi-dimensional data cube.

Granularity:
The level of detail in a data store or report.

Hierarchy:
The organization of data, e.g. a dimension, into an outline or logical tree structure.  The strata of a hierarchy are referred to as levels.  The individual elements within a level are referred to as categories.  The next lower level in a hierarchy is the child; the next higher level containing the children is their parent.

Legacy System:
Older systems developed on platforms that tend to be one or more generations behind the current state-of-the-art applications.  Data marts and warehouses were developed in large part due to the difficulty in extracting data from these systems and the inconsistencies and incompatibilities among them.

Level:
A tier or strata in a dimensional hierarchy. Each lower level represents an increasing degree of detail.  Levels in a location dimension might include country, region, state, county, city, zip code, etc.

Measure:
A quantifiable variable or value stored in a multi-dimensional OLAP cube.  It is a value in the cell at the intersection of two or more dimensions.

Member:
One of the data points for a level of a dimension.

Meta Data:
Information in a data mart or warehouse that describes the tables, fields, data types, attributes and other objects in the data warehouse and how they map to their data sources.  Meta data is contained in database catalogs and data dictionaries.

Multi-Dimensional Online Analytical Processing (MOLAP):
Software that creates multi-dimensional cubes to store its information and then analyzes them.



Non-Volatile Data:
Data that is static or that does not change.  In transaction processing systems the data is updated on a continual basis.  In a data warehouse the database is added to or appended, but the existing data seldom changes.

Normalization:
The process of eliminating duplicate information in a database by creating a separate table that stores the redundant information.  For example, it would be highly inefficient to re-enter the address of an insurance company with every claim.  Instead, the database uses a key field to link the claims table to the address table.  Operational or transaction processing systems are typically “normalized”.  On the other hand, some data warehouses find it advantageous to de-normalize the data allowing for some degree of redundancy.
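The insurance example above can be sketched as follows (hypothetical tables, modeled as plain Python structures):

```python
# Normalization sketch: the repeated insurer address moves into its own
# table, and claims link back to it through a key field.

claims = [
    {"claim_id": 1, "insurer_id": "ACME", "amount": 120.0},
    {"claim_id": 2, "insurer_id": "ACME", "amount": 80.0},
]
insurers = {  # the address is stored once, not re-entered per claim
    "ACME": {"name": "Acme Insurance", "address": "1 Main St"},
}

def claim_with_address(claim):
    """Join a claim to its insurer via the key field."""
    return {**claim, **insurers[claim["insurer_id"]]}
```

A de-normalized warehouse design would instead copy the name and address onto every claim row, trading redundancy for fewer joins at query time.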

Online Analytical Processing (OLAP):
The process employed by multi-dimensional analysis software to analyze the data resident in data cubes.  There are different types of OLAP systems named for the type of database employed to create them and the data structures produced.

Open Database Connectivity (ODBC):
A database standard developed by Microsoft and the SQL Access Group Consortium that defines the “rules” for accessing or retrieving data from a database.

Relational Database Management System:
Database management systems that have the ability to link tables of data through a common or key field.  Most databases today use relational technologies and support a standard programming language called Structured Query Language (SQL).

Relational Online Analytical Processing (ROLAP):
OLAP software that employs a relational strategy to organize and store the data in its database.

Replication:
The process of copying data from one database table to another.

Scalable:
The attribute or capability of a database to significantly expand the number of records that it can manage.  It also refers to hardware systems and their ability to be expanded or upgraded to increase their processing speed and handle larger volumes of data.

Structured Query Language (SQL):
A standard programming language used by contemporary relational database management systems.

Synchronization:
The process by which the data in two or more separate databases are synchronized so that the records contain the same information.  If the fields and records are updated in one database the same fields and records are updated in the other.

Dimensional Model: 
A type of data modeling suited for data warehousing. In a dimensional model, there are two types of tables: dimension tables and fact tables. Dimension tables record information about each dimension, and fact tables record the "facts", or measures.

Dimensional Table: 
Dimension tables store records related to a particular dimension. No facts are stored in a dimension table.

Drill Across:
Data analysis across dimensions.

Friday, 18 March 2016

Data Warehouse Glossary #1



Ad Hoc Query:

A database search that is designed to extract specific information from a database.  It is ad hoc if it is designed at the point of execution as opposed to being a “canned” report.  Most ad hoc query software uses the structured query language (SQL).

Aggregation:

The process of summarizing or combining data.

Catalog:

A component of a data dictionary that describes and organizes the various aspects of a database such as its folders, dimensions, measures, prompts, functions, queries and other database objects.  It is used to create queries, reports, analyses and cubes.

Cross Tab:

A type of multi-dimensional report that displays values or measures in cells created by the intersection of two or more dimensions in a table format.

Dashboard:

A data visualization method and workflow management tool that brings together useful information on a series of screens and/or web pages.  Some of the information that may be contained on a dashboard includes reports, web links, calendar, news, tasks, e-mail, etc.  When incorporated into a DSS or EIS, key performance indicators may be represented as graphics that are linked to various hyperlinks, graphs, tables and other reports.  The dashboard draws its information from multiple sources: applications, office products, databases, the Internet, etc.

Cube:

A multi-dimensional matrix of data that has multiple dimensions (independent variables) and measures (dependent variables) that are created by an Online Analytical Processing System (OLAP).  Each dimension may be organized into a hierarchy with multiple levels.  The intersection of two or more dimensional categories is referred to as a cell.


Data-based Knowledge:

Factual information used in the decision making process that is derived from data marts or warehouses using business intelligence tools.  Data warehousing organizes information into a format so that it represents an organization's knowledge with respect to a particular subject area, e.g. finance or clinical outcomes.

Data Cleansing:

The process of cleaning or removing errors, redundancies and inconsistencies in the data that is being imported into a data mart or data warehouse.  It is part of the quality assurance process.

Data Mart:

A database that is similar in structure to a data warehouse, but is typically smaller and is focused on a more limited area.  Multiple, integrated data marts are sometimes referred to as an Integrated Data Warehouse.  Data marts may be used in place of a larger data warehouse or in conjunction with it.  They are typically less expensive to develop and faster to deploy and are therefore becoming more popular with smaller organizations.

Data Migration:

The transfer of data from one platform to another.  This may include conversion from one language, file structure and/or operating environment to another.

Data Mining:

The process of researching data marts and data warehouses to detect specific patterns in the data sets.  Data mining may be performed on databases and multi-dimensional data cubes with ad hoc query tools and OLAP software.  The queries and reports are typically designed to answer specific questions to uncover trends or hidden relationships in the data.

Data Scrubbing:

See Data Cleansing


Data Transformation:

The modification of transaction data extracted from one or more data sources before it is loaded into the data mart or warehouse.  The modifications may include data cleansing, translation of data into a common format so that it can be aggregated and compared, summarizing the data, etc.

Data Warehouse:

An integrated, non-volatile database of historical information that is designed around specific content areas and is used to answer questions regarding an organization's operations and environment.

Database Management System:

The software that is used to create data warehouses and data marts.  For the purposes of data warehousing, they typically include relational database management systems and multi-dimensional database management systems.  Both types of database management systems create the database structures, store and retrieve the data and include various administrative functions.

Decision Support System (DSS):

A set of queries, reports, rule-based analyses, tables and charts that are designed to aid management with their decision-making responsibilities.  These functions are typically “wrapped around” a data mart or data warehouse.  The DSS tends to employ more detailed level data than an EIS.








Monday, 7 March 2016

Data Warehouse Approaches #2



Top-down approach (Inmon)

The top-down approach views the data warehouse from the top of the entire analytic environment.

The data warehouse holds atomic or transaction data that is extracted from one or more source systems and integrated within a normalized, enterprise data model. From there, the data is summarized, dimensionalized, and distributed to one or more “dependent” data marts. These data marts are “dependent” because they derive all their data from a centralized data warehouse.

Sometimes, organizations supplement the data warehouse with a staging area to collect and store source system data before it can be moved and integrated within the data warehouse. A separate staging area is particularly useful if there are numerous source systems, large volumes of data, or small batch windows with which to extract data from source systems.


Pros/Cons 

The major benefit of a “top-down” approach is that it provides an integrated, flexible architecture to support downstream analytic data structures.
First, this means the data warehouse provides a departure point for all data marts, enforcing consistency and standardization so that organizations can achieve a single version of the truth. Second, the atomic data in the warehouse lets organizations re-purpose that data in any number of ways to meet new and unexpected business needs.

For example, a data warehouse can be used to create rich data sets for statisticians, deliver operational reports, or support operational data stores (ODS) and analytic applications. Moreover, users can query the data warehouse if they need cross-functional or enterprise views of the data.

On the downside, a top-down approach may take longer and cost more to deploy than other approaches, especially in the initial increments. This is because organizations must create a reasonably detailed enterprise data model as well as the physical infrastructure to house the staging area, data warehouse, and the data marts before deploying their applications or reports. (Of course, depending on the size of an implementation, organizations can deploy all three “tiers” within a single database.) This initial delay may cause some groups with their own IT budgets to build their own analytic applications. Also, it may not be intuitive or seamless for end users to drill through from a data mart to a data warehouse to find the details behind the summary data in their reports.


Bottom-up approach (Kimball)

In a bottom-up approach, the goal is to deliver business value by deploying dimensional data marts as quickly as possible. Unlike the top-down approach, these data marts contain all the data — both atomic and summary — that users may want or need, now or in the future. Data is modeled in a star schema design to optimize usability and query performance. Each data mart builds on the next, reusing dimensions and facts so users can query across data marts, if desired, to obtain a single version of the truth as well as both summary and atomic data.

The “bottom-up” approach consciously tries to minimize back-office operations, preferring to focus an organization’s effort on developing dimensional designs that meet end-user requirements. The “bottom-up” staging area is non-persistent, and may simply stream flat files from source systems to data marts using the file transfer protocol. In most cases, dimensional data marts are logically stored within a single database. This approach minimizes data redundancy and makes it easier to extend existing dimensional models to accommodate new subject areas.


Pros/Cons 

The major benefit of a bottom-up approach is that it focuses on creating user-friendly, flexible data structures using dimensional, star schema models. It also delivers value rapidly because it doesn’t lay down a heavy infrastructure up front.
Without an integration infrastructure, the bottom-up approach relies on a “dimensional bus” to ensure that data marts are logically integrated and stovepipe applications are avoided. To integrate data marts logically, organizations use “conformed” dimensions and facts when building new data marts. Thus, each new data mart is integrated with others within a logical enterprise dimensional model.
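As a minimal sketch of how a conformed dimension logically integrates two marts (the date dimension and fact tables below are hypothetical):

```python
# Two data marts share one conformed date dimension, so a query can
# combine measures from both by joining through the same dimension keys.

dim_date = {20160321: {"date_sk": 20160321, "year": 2016, "month": 3}}

fact_sales = [{"date_sk": 20160321, "revenue": 500.0}]       # sales mart
fact_shipments = [{"date_sk": 20160321, "units": 12}]        # shipping mart

def monthly_cross_mart(year, month):
    """Combine measures from both marts via the shared dimension."""
    keys = {k for k, d in dim_date.items()
            if d["year"] == year and d["month"] == month}
    revenue = sum(f["revenue"] for f in fact_sales if f["date_sk"] in keys)
    units = sum(f["units"] for f in fact_shipments if f["date_sk"] in keys)
    return {"revenue": revenue, "units": units}
```

Because both facts reference identical dimension keys and attribute values, the cross-mart result is consistent: this is the "dimensional bus" idea in miniature.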
Another advantage of the bottom-up approach is that since the data marts contain both summary and atomic data, users do not have to “drill through” from a data mart to another structure to obtain detailed or transaction data. The use of a staging area also eliminates redundant extracts and overhead required to move source data into the dimensional data marts.

One problem with a bottom-up approach is that it requires organizations to enforce the use of standard dimensions and facts to ensure integration and deliver a single version of the truth. When data marts are logically arrayed within a single physical database, this integration is easily done. But in a distributed, decentralized organization, it may be too much to ask departments and business units to adhere and reuse references and rules for calculating facts. There can be a tendency for organizations to create “independent” or non-integrated data marts.

In addition, dimensional marts are designed to optimize queries, not support batch or transaction processing. Thus, organizations that use a bottom-up approach need to create additional data structures outside of the bottom-up architecture to accommodate data mining, ODSs, and operational reporting requirements. However, this may be achieved simply by pulling a subset of data from a data mart at night when users are not active on the system.








Thursday, 3 March 2016

Data Warehouse Approaches #1



It has been said there are as many ways to build data warehouses as there are companies to build them. Each data warehouse is unique because it must adapt to the needs of business users in different functional areas, whose companies face different business conditions and competitive pressures.
Nonetheless, three major approaches to building a data warehousing environment exist. These approaches are generally referred to as:

1. Top-down
2. Bottom-up
3. Hybrid

Although we have been building data warehouses since the early 1990s, there is still a great deal of confusion about the similarities and differences among these architectures. This is especially true of the "top-down" and "bottom-up" approaches, which have existed the longest and occupy the polar ends of the development spectrum.
As a result, some organizations fail to adopt a clear vision for the way the data warehousing environment can and should evolve. Others, paralysed by confusion or fear of deviating from prescribed tenets for success, cling too rigidly to one approach or another, undermining their ability to respond flexibly to new or unexpected situations. Ideally, organizations need to borrow concepts and tactics from each approach to create environments that uniquely meet their needs.



Semantic and Substantive Differences

The two most influential approaches are championed by industry heavyweights Bill Inmon and Ralph Kimball, both prolific authors and consultants in the data warehousing field.
Inmon, who is credited with coining the term "data warehousing" in the early 1990s, advocates a top-down approach, in which companies first build a data warehouse followed by data marts.
Kimball’s approach, on the other hand, is often called bottom-up because it starts and ends with data marts, negating the need for a physical data warehouse altogether.

On the surface, there is considerable friction between top-down and bottom-up approaches. But in reality, the differences are not as stark as they may appear. Both approaches advocate building a robust enterprise architecture that adapts easily to changing business needs and delivers a single version of the truth. In some cases, the differences are more semantic than substantive in nature. For example, both approaches collect data from source systems into a single data store, from which data marts are populated. But while "top-down" subscribers call this a data warehouse, "bottom-up" adherents often call this a "staging area."

Nonetheless, significant differences exist between the two approaches. Data warehousing professionals need to understand the substantial, subtle, and semantic differences among the approaches, and which industry "gurus" or consultants advocate each approach. This will provide a clearer understanding of the different routes to data warehousing success and how to translate between the advice and rhetoric of the different approaches.




