Re-Keying Surrogate Keys for Dimension and Fact Tables: Need, Impact, and Fix
A surrogate key is an artificial key used as a substitute for a natural key. Every surrogate key points to a dimension record, which represents the state of that record at a point in time. We join fact tables to dimension tables on surrogate keys to get the factual information as of a point in time. In this article, let's look at the need for surrogate key re-keying, the impact of re-keying, and a possible fix.
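The role of the surrogate key described above can be sketched as follows. This is a minimal, language-neutral illustration; the table and column names (`dim_customer`, `fact_sales`, `customer_sk`) are assumptions for the example, not taken from the article.

```python
import sqlite3

# Illustrative schema: each dimension row gets its own surrogate key, so one
# natural key ('C100') can map to several rows, one per historical state.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_customer (customer_sk INTEGER PRIMARY KEY,
                           customer_id TEXT, city TEXT);
CREATE TABLE fact_sales   (customer_sk INTEGER, amount REAL);
INSERT INTO dim_customer VALUES (1, 'C100', 'Dallas'),  -- older state
                                (2, 'C100', 'Austin');  -- current state
INSERT INTO fact_sales VALUES (1, 50.0), (2, 75.0);
""")

# Joining on the surrogate key returns the dimension state that was current
# when each fact row was loaded.
rows = con.execute("""
    SELECT d.city, f.amount
    FROM fact_sales f JOIN dim_customer d ON d.customer_sk = f.customer_sk
    ORDER BY f.amount
""").fetchall()
print(rows)
```

Because every fact row stores a surrogate key value, re-keying the dimension forces the fact table to be re-keyed too, which is exactly the impact the article discusses.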
Change Data Capture (CDC) Implementation for Multi Sourced ETL Processes
We have discussed a couple of different options for Change Data Capture, including a Change Data Capture framework, in our prior discussions. Implementing change capture for an ETL process that involves multiple data sources needs special care, so that changes from any of your data sources are captured. In this article, let's look at a CDC implementation for ETL processes that involve multiple data sources.
Continue Reading
Initial History Building Algorithm for Slowly Changing Dimensions
Building initial history for a data warehouse is a complex and time-consuming task. It involves taking into account all the date intervals during which the source system's representation of the data changed in any of the tables feeding the dimension tables. So we can imagine the complexity of history building and the need for a reusable algorithm.
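The interval logic described above can be sketched like this: collect every effective date on which any source table changed, then cut the timeline into one version per consecutive pair of dates. The source tables and dates here are invented for illustration.

```python
from datetime import date

# Assumed inputs: the dates on which each source table changed for one
# natural key (hypothetical example data).
cust_changes = [date(2015, 1, 1), date(2016, 6, 1)]
addr_changes = [date(2015, 1, 1), date(2015, 9, 15)]

# A new dimension version starts whenever ANY feeding table changed.
change_dates = sorted(set(cust_changes + addr_changes))

# Each consecutive pair of change dates is one historical version interval;
# the last version stays open-ended (no known end date yet).
intervals = [(d, change_dates[i + 1]) for i, d in enumerate(change_dates[:-1])]
intervals.append((change_dates[-1], None))
print(intervals)
```

With more source tables the set union above simply grows, which is why a reusable algorithm pays off.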
5 Restartability Design Patterns for Different Types of ETL Loads
Restartable ETL jobs are crucial to failure recovery, supportability, and data quality in any ETL system. You need to build your ETL system around the ability to recover from the abnormal end of a job and restart. A well-designed ETL system should therefore have a good restart mechanism. In this article, let's discuss restartability approaches for different types of ETL jobs, such as dimension loads and fact loads.
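One widely used building block for restartability is a batch-status (checkpoint) table: a rerun skips any batch already committed. This is a generic sketch, not the article's specific patterns; the `etl_batch` table and its columns are assumed names.

```python
import sqlite3

# Minimal checkpoint-table sketch: each batch records its status so a rerun
# after an abnormal end can skip work that already committed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE etl_batch (batch_id INTEGER PRIMARY KEY, status TEXT)")

def run_batch(batch_id, load):
    done = con.execute(
        "SELECT 1 FROM etl_batch WHERE batch_id = ? AND status = 'DONE'",
        (batch_id,)).fetchone()
    if done:
        return "skipped"          # already loaded; restart is a no-op
    load()                        # the actual dimension/fact load would go here
    con.execute("INSERT OR REPLACE INTO etl_batch VALUES (?, 'DONE')", (batch_id,))
    return "loaded"

print(run_batch(1, lambda: None))  # first run loads
print(run_batch(1, lambda: None))  # restart skips the completed batch
```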
SCD Type 1 Implementation using Informatica PowerCenter
Unlike SCD Type 2, Slowly Changing Dimension Type 1 does not preserve any history versions of the data. This methodology overwrites old data with new data, and therefore stores only the most current information. In this article, let's discuss the step-by-step implementation of SCD Type 1 using Informatica PowerCenter.
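The article implements this in Informatica PowerCenter; as a language-neutral sketch, the overwrite behaviour of SCD Type 1 looks like the upsert below. Table and column names are illustrative assumptions.

```python
import sqlite3

# SCD Type 1: an incoming value simply overwrites the stored one,
# so no history of 'Dallas' survives the update.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE dim_customer (customer_id TEXT PRIMARY KEY, city TEXT)")
con.execute("INSERT INTO dim_customer VALUES ('C100', 'Dallas')")

def scd_type1_upsert(customer_id, city):
    # Insert new natural keys; overwrite attributes for existing keys.
    con.execute("""INSERT INTO dim_customer VALUES (?, ?)
                   ON CONFLICT(customer_id) DO UPDATE SET city = excluded.city""",
                (customer_id, city))

scd_type1_upsert("C100", "Austin")   # overwrite: only the current value remains
print(con.execute("SELECT * FROM dim_customer").fetchall())
```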
Continue Reading
Design approach to Update Huge Tables Using Oracle MERGE
One of the issues we come across during ETL design is updating large tables. This is a very common ETL scenario, especially when you deal with large volumes of data, such as loading an SCD Type 2 dimension. We discussed a design approach for this scenario in one of our prior articles. In this updated article, let's discuss a different approach to updating large tables, using Oracle MERGE from an Informatica mapping.
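Oracle's MERGE applies insert-or-update to a whole staging set in one set-based statement, instead of issuing an UPDATE per row. As a runnable sketch of that idea, the example below uses SQLite's equivalent UPSERT syntax; all table and column names are illustrative, not from the article.

```python
import sqlite3

# Merge a staging table into a large target in one statement
# (SQLite's UPSERT standing in for Oracle MERGE).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id TEXT PRIMARY KEY, price REAL);
CREATE TABLE stg_product (product_id TEXT PRIMARY KEY, price REAL);
INSERT INTO dim_product VALUES ('P1', 9.99), ('P2', 5.00);
INSERT INTO stg_product VALUES ('P2', 6.50), ('P3', 3.25);  -- one update, one insert
""")

# One set-based statement merges the whole staging table into the target --
# far cheaper on a large table than row-by-row updates.
con.execute("""INSERT INTO dim_product
               SELECT product_id, price FROM stg_product WHERE true
               ON CONFLICT(product_id) DO UPDATE SET price = excluded.price""")
print(con.execute("SELECT * FROM dim_product ORDER BY product_id").fetchall())
```

The `WHERE true` clause is required by SQLite's parser when combining `INSERT ... SELECT` with `ON CONFLICT`; Oracle's MERGE expresses the same matched/not-matched logic with `WHEN MATCHED THEN UPDATE` and `WHEN NOT MATCHED THEN INSERT` clauses.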
Continue Reading
SCD Type 3 Implementation using Informatica PowerCenter
Unlike SCD Type 2, Slowly Changing Dimension Type 3 preserves only a few history versions of the data, most of the time the 'Current' and 'Previous' versions. The 'Previous' value is stored in additional columns within the same dimension record. In this article, let's discuss the step-by-step implementation of SCD Type 3 using Informatica PowerCenter.
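The 'Current'/'Previous' column mechanics can be sketched in a few lines, independent of the PowerCenter implementation the article covers. The record layout and column names (`city`, `prev_city`) are assumptions for the example.

```python
# SCD Type 3: the dimension row carries a 'previous' column
# alongside the current value.
row = {"customer_id": "C100", "city": "Dallas", "prev_city": None}

def scd_type3_update(row, new_city):
    if new_city != row["city"]:
        row["prev_city"] = row["city"]   # shift current value into the history column
        row["city"] = new_city           # only one prior version is ever kept

scd_type3_update(row, "Austin")
print(row)
```

Note that a second change would overwrite `prev_city`, which is exactly the limitation that distinguishes Type 3 from the full history of Type 2.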
Continue Reading
Change Data Capture (CDC) Implementation Using CHECKSUM Number
Typically we use a date column or a flag column to identify changed records for Change Data Capture. But there can be scenarios where your source does not have any column that identifies changed records, especially when working with legacy systems. In this article, let's see how to implement Change Data Capture (CDC) for such scenarios using a checksum number.
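The core of the checksum approach can be sketched as follows: concatenate the row's attributes, hash them, and compare against the checksum stored from the previous load. The hash function and the attribute names here are assumptions for illustration (the article may compute its checksum differently, e.g. inside the ETL tool).

```python
import hashlib

# Hash the concatenated attribute values; a changed checksum flags a
# changed record even when the source has no audit columns.
def row_checksum(row):
    payload = "|".join(str(v) for v in row.values())
    return hashlib.md5(payload.encode()).hexdigest()

# Checksum stored during the previous load (illustrative data).
stored = {"C100": row_checksum({"name": "Ann", "city": "Dallas"})}

incoming = {"name": "Ann", "city": "Austin"}
if row_checksum(incoming) != stored["C100"]:
    print("changed record detected")  # re-process this row
```

Using a delimiter between values (the `"|"` above) matters: without it, `("ab", "c")` and `("a", "bc")` would hash to the same checksum and a change could slip through.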
Continue Reading
© 2012-2017 Data Integration Solution, All Rights Reserved
The contents of this site are copyrighted to Data Integration Solution and may not be reproduced on other websites.