Dylan's BI Study Notes

My notes about Business Intelligence, Data Warehousing, OLAP, and Master Data Management


Logical and Physical Schema in ODI

Posted by Dylan Wan on November 10, 2015

The ODI Topology allows you to isolate the physical connection from the logical data source by defining a physical schema and a logical schema.

This separation may seem redundant during development.  However, it is a very useful feature for supporting the Test to Production (T2P) process.
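To make the idea concrete, here is a small Python sketch of my own (this is not ODI's API): the ETL code refers only to the logical schema, and a context decides which physical schema it resolves to.  All server and schema names below are made up for illustration.

    # Illustrative sketch, not ODI's API: a logical schema resolves to a
    # physical schema through a context. All names below are made up.
    TOPOLOGY = {
        # logical schema -> context -> physical connection details
        "LS_SRC_EBS": {
            "TEST": {"data_server": "ebs-test:1521/TST", "schema": "APPS"},
            "PROD": {"data_server": "ebs-prod:1521/PRD", "schema": "APPS"},
        },
    }

    def resolve(logical_schema: str, context: str) -> dict:
        """Return the physical schema a logical schema maps to in a context."""
        return TOPOLOGY[logical_schema][context]

    # The ETL code only ever names "LS_SRC_EBS"; promoting from Test to
    # Production (T2P) is just running under a different context.
    print(resolve("LS_SRC_EBS", "PROD"))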



Using Load Plan for managing your ETL tasks in BI Apps 11.1.1.7.1 (1)

Posted by Dylan Wan on May 14, 2013

One of the major changes introduced in BI Apps 11.1.1.7.1 is the way we manage the ETL task sequence and trim unnecessary tasks.

This functionality was accomplished earlier using DAC.  The problem we frequently faced was that the DAC repository and the INFA repository were maintained as two separate repositories.  We had to sync up the task names exactly in order for DAC to manage the execution of the Informatica workflow tasks.

The Load Plan and the Load Plan Generator were designed to address this requirement.

Here is a good article that describes the story.

Load Plan Generator – An Inside Look


Benefits of using ODI as the ETL for BI Apps

Posted by Dylan Wan on May 2, 2013

Oracle Data Integrator (ODI) is used as the ETL tool for BI Apps.

Here are some benefits:

1. Productivity

It takes less effort for developers to write an ETL map compared to other ETL tools.

2. Define ETL flow patterns in KM

The reason it now takes less effort is that the flow logic lives inside the knowledge module (KM).  You pick the flow logic you want by picking a KM, and you configure the flow for your specific requirements by setting the KM options.  All that is left is to define the source SQL and the mapping expressions.  That is pretty much all there is to it for any typical job.

In a prebuilt ETL adaptor solution like BI Apps, we ship a set of such patterns.  Unless a specific task requirement is unique, you do not need to write any new KM or any complex logic.
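As a rough illustration of the idea (this is not ODI's actual template syntax, and all names are made up), think of a KM as a reusable SQL flow pattern that each ETL task fills in with its own source SQL and column mappings:

    # Sketch only: the flow pattern (an incremental merge) lives in one
    # reusable template; each ETL task supplies just the source SQL and
    # the column mappings. All names are made up for illustration.
    INCREMENTAL_KM = """\
    MERGE INTO {target} t
    USING ({source_sql}) s
      ON (t.{key} = s.{key})
    WHEN MATCHED THEN UPDATE SET {update_set}
    WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({src_cols})"""

    def generate_task(target, key, source_sql, columns):
        """Render one ETL task from the pattern plus mapping metadata."""
        return INCREMENTAL_KM.format(
            target=target,
            key=key,
            source_sql=source_sql,
            update_set=", ".join(f"t.{c} = s.{c}" for c in columns if c != key),
            cols=", ".join(columns),
            src_cols=", ".join(f"s.{c}" for c in columns),
        )

    print(generate_task(
        target="W_CUSTOMER_D",
        key="INTEGRATION_ID",
        source_sql="SELECT integration_id, cust_name FROM stg_customers",
        columns=["INTEGRATION_ID", "CUST_NAME"],
    ))

Swapping the KM swaps the whole flow without touching the source SQL or the mappings.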

3. Code Generation

ODI provides an SDK.  Although the ETL programs themselves are stored declaratively in the ODI repository, they can be generated programmatically using the ODI SDK.  Not only is the ETL flow logic centralized inside the KM, the ETL metadata can also be generated.
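The real SDK is a Java library (oracle.odi.*), so treat the following Python sketch purely as an illustration of the idea: because a mapping is declarative metadata, a program can stamp out many of them from a table list.  All names here are made up.

    # Illustration only, NOT the ODI SDK (which is a Java library). The
    # point: declarative ETL metadata can be generated in bulk.
    SOURCE_TABLES = ["GL_BALANCES", "AP_INVOICES", "AR_RECEIPTS"]

    def extract_task(table: str) -> dict:
        """Emit the declarative definition of one extract mapping."""
        return {
            "name": f"SDE_{table}",
            "source": {"logical_schema": "LS_SRC_EBS", "table": table},
            "target": {"logical_schema": "LS_DW", "table": f"W_{table}_FS"},
            "km": "IKM SQL Control Append",  # the flow pattern, picked by name
        }

    for task in (extract_task(t) for t in SOURCE_TABLES):
        print(task["name"], "->", task["target"]["table"])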

4. User Defined Function

The user-defined function is one of the pieces of magic behind supporting different platforms.  You use the same function and do not need to worry about the different SQL functions supported by each database platform; the ODI engine will compile the expression and substitute the right implementation for you.

It is not a required feature for building a custom ETL, but it is a required feature for prebuilt ETL adaptors, which need to support source apps, such as Siebel, JD Edwards, and PeopleSoft applications, on different database platforms.
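Here is a hedged sketch of the idea in Python (ODI user functions are actually defined in the Designer with one implementation per technology; the function name and syntax below are simplified and made up):

    # Sketch of the UDF idea: one logical function, one implementation per
    # database technology, substituted at code-generation time. Simplified
    # and made up; this is not how real ODI user functions are written.
    UDF_IMPLS = {
        "NVL_STR": {  # logical function name, takes one argument
            "ORACLE":     "NVL({arg}, 'UNKNOWN')",
            "SQL_SERVER": "ISNULL({arg}, 'UNKNOWN')",
            "DB2":        "COALESCE({arg}, 'UNKNOWN')",
        },
    }

    def compile_call(func: str, arg: str, technology: str) -> str:
        """Replace a logical UDF call with the technology-specific SQL."""
        return UDF_IMPLS[func][technology].format(arg=arg)

    # The same mapping expression compiles differently per platform:
    print(compile_call("NVL_STR", "src.NAME", "ORACLE"))      # NVL(...)
    print(compile_call("NVL_STR", "src.NAME", "SQL_SERVER"))  # ISNULL(...)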

5. Data Lineage

Data lineage within the ETL tool becomes easy.  You can readily identify which ETL task populates a given table column and, for a given source table column, which ETL tasks use it as a source.

Although it may not be recommended, the ODI metadata repository can even be queried directly from the database, which makes ETL auditing possible.  That is a productivity gain, since you can spend less quality assurance time detecting issues, and a quality gain, since the resulting ETL adaptor is of higher quality.
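A lineage query might look like the sketch below.  This is a hedged illustration: the work-repository tables and columns (SNP_POP and so on) vary by ODI version and are not a supported interface, so verify the names against your own repository before relying on them.

    # Hedged sketch: querying the ODI work repository for lineage. The
    # table/column names are illustrative of ODI 11g naming (SNP_*) and
    # must be verified for your release; this is not a supported API.
    import cx_Oracle  # assumes the Oracle client driver is installed

    conn = cx_Oracle.connect("odi_work", "password", "dwhost/ORCL")
    LINEAGE_SQL = """
        SELECT p.pop_name AS etl_task,
               c.col_name AS target_column
        FROM   snp_pop p
        JOIN   snp_pop_col c ON c.i_pop = p.i_pop
        WHERE  c.col_name = :col
    """
    for task, col in conn.cursor().execute(LINEAGE_SQL, col="CUSTOMER_ID"):
        print(task, "populates", col)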

6. Performance

When the source data and target data live in the same database, a middle-tier ETL engine becomes pure overhead.  We can better leverage the power of the database for the ETL transformation.

This is the benefit we expect from an E-L-T solution.
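To illustrate the difference (the table names are made up): an E-L-T run pushes one set-based statement into the database, while a middle-tier engine streams every row out and back.

    # Illustration with made-up table names: the same transformation as
    # E-L-T (one set-based statement inside the database) versus what a
    # middle-tier E-T-L engine effectively does (row-by-row round trips).

    # E-L-T: executed where the data lives, no rows cross the network.
    ELT_SQL = """
        INSERT INTO w_sales_f (order_id, amount_usd)
        SELECT o.order_id, o.amount * r.usd_rate
        FROM   stg_orders o
        JOIN   stg_fx_rates r ON r.currency = o.currency
    """

    # E-T-L: every row is fetched into the engine, transformed, and sent back.
    def etl_row_by_row(src_cursor, tgt_cursor):
        for order_id, amount, usd_rate in src_cursor:
            tgt_cursor.execute(
                "INSERT INTO w_sales_f VALUES (:1, :2)",
                (order_id, amount * usd_rate),
            )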

For cross-database deliveries, many optimization options are available.  A simple database link between the database instances can be used, and replicating the data with Oracle GoldenGate is also possible.  Thanks to the KM and UDF benefits above, the same ETL code can be switched to run the extract process against the replicated data or against the remote source apps database.

7. Load Plan

The ETL task dependencies and trimming can be accomplished via the ODI Load Plan, within the same ODI repository.  This architecture eliminates the need to duplicate the ETL task metadata into a separate task sequencing and management tool, such as the BI Apps Data Warehouse Administration Console (DAC).

You can define, execute, and monitor your ETL jobs within a single tool, ODI Studio.
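As a rough sketch of what dependency management and trimming mean here (my own illustration, not the load plan's actual storage format): a load plan is a tree of serial and parallel steps, and disabled steps are dropped before execution.

    # Illustration only, not the real load plan format: a tree of serial/
    # parallel steps; trimming removes disabled tasks before execution.
    LOAD_PLAN = {
        "type": "serial",
        "steps": [
            {"type": "parallel", "steps": [
                {"task": "SDE_GL_Balances", "enabled": True},
                {"task": "SDE_AP_Invoices", "enabled": False},  # trimmed out
            ]},
            {"task": "SIL_GL_Balance_Fact", "enabled": True},
        ],
    }

    def trim(node):
        """Drop disabled tasks, and groups left empty, before execution."""
        if "task" in node:
            return node if node["enabled"] else None
        steps = [t for t in (trim(s) for s in node["steps"]) if t]
        return {**node, "steps": steps} if steps else None

    print(trim(LOAD_PLAN))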

(to be continued…)
