Dylan's BI Study Notes

My notes about Business Intelligence, Data Warehousing, OLAP, and Master Data Management

Logical and Physical Schema in ODI

Posted by Dylan Wan on November 10, 2015

ODI Topology allows you to separate the physical connection from the logical data source by defining a physical schema and a logical schema.

This separation may look redundant during development.  However, it is a very useful feature for supporting the Test to Production (T2P) process.
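For example (names hypothetical): a logical schema DW_BIAPPS can map to DEV_SERVER.BIA_DW under a Development context and to PROD_SERVER.BIA_DW under a Production context.  Interfaces reference only the logical schema, so promoting code from test to production requires no change to the mappings themselves, just a different context at execution time.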

OTBI Enterprise

Posted by Dylan Wan on September 15, 2015

OTBI Enterprise is the BI cloud service, a SaaS deployment of OBIA.  It uses a data warehouse based architecture, and the ETL processes are handled within the cloud.  The data is first loaded, in its original format, from either on-premise or cloud sources using various means.  It lands in the SDS schema, and then the regular ETL processes that were designed for on-premise deployment populate the data warehouse.  All of these processes are meant to be transparent to the deploying companies.

As of September 2015, OTBI-E for HCM is the only module that is available.

Market Segmentation and Data Mining

Posted by Dylan Wan on April 3, 2015

1. Market Segmentation in the academic world 

Market Segmentation is part of the marketing process. It is described in Philip Kotler’s book as part of the step of defining the marketing strategy. The idea is to segment the consumer market by some variables and to divide the market into different segments. Selecting the segments for your products is the result of the marketing strategy.  If you have multiple product lines, this definition process can be done for each product line, if not at the individual product level.

For example, my product X is targeted at consumers who are women without kids, living in a city, and having more than $90,000 in income.

This web page includes a very good and concise summary – Market Segmentation and Selection of Target Segments.  It reminded me of what I learned in my Marketing course 20 years ago.

2. Market Segmentation as a product feature

Once the variables and the segments are defined as part of the marketing strategy, …

How to integrate Oracle Data Mining and Oracle BI

Posted by Dylan Wan on April 2, 2015

Here are the various ways that we can use Data Mining inside BI.

We can build Advanced Analytics applications.

The scoring function can be called within an opaque view or with the EVALUATE function.

The opaque view method may provide better flexibility, since multiple columns can be exposed.

Here is an old Oracle white paper about how to use EVALUATE in BI Server: white paper
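As a concrete illustration, here is a minimal sketch of both approaches. The model, table, and column names (churn_model, customers, etc.) are assumptions for the example, not names from the white paper:

```sql
-- 1) Opaque view: a SELECT registered in the OBIEE physical layer.
--    It can expose several scoring columns at once.
SELECT cust_id,
       PREDICTION(churn_model USING *)             AS churn_prediction,
       PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
FROM   customers;

-- 2) EVALUATE: pass a single database scoring call through from an
--    analysis column formula (hypothetical model and columns):
-- EVALUATE('PREDICTION(churn_model USING %1 AS age, %2 AS income)',
--          "Customer"."Age", "Customer"."Income")
```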

BI Apps has a Universal Data Warehouse

Posted by Dylan Wan on March 23, 2015

The BI Apps data warehouse design is based on the assumption that the data warehouse schema design is independent of the OLTP system.

The staging schema is a universal staging schema, and the data warehouse is a universal data warehouse.

The assumption is that no matter what source system you are using, the business questions the BI consumers have are similar.  You want to see revenue, cost, and profitability; whether you are using Oracle E-Business Suite, PeopleSoft, or J.D. Edwards, revenue is still revenue and cost is still cost.  The definitions of these metrics won’t vary much across the OLTP systems you are using.

It is the BI Apps development team’s job to find where these numbers should be extracted from the source systems and to define the “map” between the source system and the target data warehouse system.

This universal data warehouse assumption gives BI Apps a unique advantage.  It allows the teams to develop the source adaptors independently of the data warehouse design, and thus independently of the BI metadata.

Viewed another way, the application logic built in the BI tool and the knowledge of designing the data warehouse schema can be reused over time and across many projects.

Adding support for the first adaptor, for E-Business Suite, may take some time in order to analyze the analytic requirements.  Adding support for the PeopleSoft and JDE adaptors then becomes a job of providing the source-to-target logical mapping and creating the ETL jobs that load the universal staging tables, as sketched below.
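A minimal sketch of the idea: both adaptors load the same universal staging table, and everything downstream of staging is shared.  The staging table, columns, and data source numbers here are illustrative assumptions, not the shipped mappings:

```sql
-- E-Business Suite adaptor (AR invoice lines)
INSERT INTO w_sales_invoice_line_fs
       (integration_id, invoice_amt, datasource_num_id)
SELECT customer_trx_id || '~' || line_number, extended_amount, 4
FROM   ra_customer_trx_lines_all;

-- PeopleSoft adaptor (billing lines), loading the very same table
INSERT INTO w_sales_invoice_line_fs
       (integration_id, invoice_amt, datasource_num_id)
SELECT invoice || '~' || line_seq_num, net_extended_amt, 518
FROM   ps_bi_line;
```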

Extracting Data from Cloud Apps

Posted by Dylan Wan on March 5, 2015

I think it would be easier if the cloud application were aware of data integration needs and published its interfaces proactively.

Here are some basic requirements for applications that can be considered data integration friendly:

1. Publish the object data model

This is required for source analysis.

For example, here is the RightNow Object Model.

The documentation should describe the objects, the attributes available, the identifiers, the validation rules, the relationships among the objects (including their cardinality), the domains or lookups used in validating the data, etc.

2. Support a scalable solution for the Full Extract

The initial load may require a full extract of the historical transaction data and reference data.  This is typically a one-time event during the implementation phase.

If the extraction starts at about the same time the cloud apps are initially implemented, the volume of the initial extract may be similar to an incremental extract.  However, if the cloud BI is implemented much later than the transactional apps, the volume of the initial load may be big.

3. Support incremental extract / Change Data Capture

The daily or near real-time extract requires a way to identify the delta: the data created or updated since the last extract.

In many systems, this can be done via an auditing column, such as the last updated date column.  However, if the source object model allows a nested structure, the last update date may need to be available in two forms: one that reflects updates to the header only, and one that reflects updates to any object, including the children and descendants.
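A minimal sketch of both filters, with hypothetical table and column names and a :last_extract_date bind variable:

```sql
-- Flat object: filter on the audit column directly.
SELECT *
FROM   src_orders
WHERE  last_update_date > :last_extract_date;

-- Nested object: a header-only filter would miss child-level changes,
-- so one option is to roll the children's audit dates up to the header.
SELECT h.*
FROM   src_order_headers h
WHERE  :last_extract_date <
       GREATEST(h.last_update_date,
                NVL((SELECT MAX(l.last_update_date)
                     FROM   src_order_lines l
                     WHERE  l.order_header_id = h.order_header_id),
                    h.last_update_date));
```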

4. Delete Capture

Capturing the deleted data during the incremental extract is different from the creation and update cases. If the deleted rows are not extracted, the data will be kept in the data warehouse and treated as normal data, and the reports could be wrong.

In some source systems, deletion and archiving are different events, and we need a way to differentiate between them.
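One common pattern, sketched here with hypothetical names, is to extract only the source primary keys in full and then soft-delete the warehouse rows whose keys no longer exist in the source:

```sql
-- stg_order_keys holds a full extract of the source primary keys.
UPDATE w_order_f f
SET    f.delete_flg = 'Y'
WHERE  NOT EXISTS (SELECT 1
                   FROM   stg_order_keys k
                   WHERE  k.order_id = f.integration_id);
```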

5. Metadata Extract

This is a more advanced feature.  It is required for supporting the custom objects.

For regular objects, the expected data model is determined at design time.

However, if the source system allows customization that can add or enable new attributes, or even create new objects, the extraction program needs to be able to detect the model change and react accordingly.

6. Extracting the Lookup or Domain data

Many systems support multiple languages using lookup tables.  The idea is that the translated strings are kept in a separate table, so the data shown on the application UI appears in the preferred language of the login user.  Not all source systems support this feature.  However, for the source systems that do, extracting the lookup data allows BI to show the data with the same translations as the source system.

The requirement for the extraction process is to extract the data in all the languages that are expected on the BI / data warehouse side.
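For example, a sketch against the E-Business Suite lookup table (other sources keep translations in similar per-language tables; the language list is an assumption):

```sql
-- Extract every translation the warehouse must support.
SELECT lookup_type, lookup_code, language, meaning
FROM   fnd_lookup_values
WHERE  language IN ('US', 'F', 'JA');  -- languages required by BI
```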

7. Extracting data for integrated data security

If the users who can access the cloud source apps and the users who can access BI are the same set of users, or a subset, the user and authentication service should be shared.

However, if the user identifiers or the internal person record identifiers are used in building the data security, extracting the users and the roles granted to them may be necessary.

8. Extracting the UI displayed Labels

In some cases, the cloud BI can be seamlessly integrated with or even embedded into the source apps.  The displayed labels for the objects and the attributes need to be consistent.  In such cases, extracting the display labels is also required.

It may be similar to the process of extracting the lookup or domain data, which has the data available in multiple languages.

Supply Chain Questions

Posted by Dylan Wan on September 28, 2014

The key questions are about inventory and orders.

  • What is our inventory status, by product and by location?
    • Do we have enough inventory on hand and in process to meet customer demand?
    • How can we maximize customer satisfaction while minimizing the cost and size of inventory on hand? Both overstocking and understocking are bad.
    • Do we have enough finished goods to cover our backlog orders?  How much raw material, work in process, and finished goods do we have on hand?
  • How well is our order fulfillment process performing?
    • What are the top orders in backlog?  How can we prioritize strategic customers?
    • How can we ensure fulfillment on time?
    • Which products have the longest fulfillment cycles?
  • Where are the bottlenecks in the order management process?
    • What is our order management cycle time? How long does it take to book an order?
  • Do we know why customers cancel orders and return products?
    • How much revenue was lost due to order cancellations or returns?

Typical dimensions: By Product, By Location, By Customer

Different roles: Director of Sales Operations, Inventory Managers, Order Management Managers

Integration with other systems:

  • Customers and pipeline information from a CRM application
  • Product information from ERP
  • Workforce information from Human Resource
    • Align the training plans to skill gaps in critical product areas
    • Correlate staffing levels with order volume and product plans

Incremental ETL in BI Apps 11.1.1.7.1 (1)

Posted by Dylan Wan on May 10, 2013

The data warehouse needs to be refreshed periodically to make sure that it has up-to-date information.  The Incremental Load is the regular ETL process, running every day.  The frequency can vary, but it is commonly a nightly job.  To ensure that the incremental ETL job can finish within a limited time window, an incremental load typically identifies the changes that happened in the source system and deals only with the delta, avoiding a Full ETL Load.

In BI Apps 11.1.1.7.1, if you are not leveraging GoldenGate and the SDS (Source Dependent Data Store), the incremental ETL is accomplished by directly comparing the last update date captured in the source table with the last refreshed date.

The BI Apps 11.1.1.7.1 ODI-based ETL does not use separate ETL tasks for incremental load and full load.  The logic is determined by whether the load has been executed before; in other words, it is based on whether the data has already been refreshed.  This was a productivity gain for BI Apps developers and also made the support of incremental ETL clearer.  In the past, you might see both an Incremental Load Informatica workflow and a Full Load workflow.  Every time there was a logic change, both workflows needed to be modified, which was a source of code bugs.  By using a better ETL architecture, we eliminated the need to maintain two separate tasks.

Here is a high level summary of how the incremental ETL is accomplished in BI Apps 11.1.1.7.1 ETL:

1. Use the W_ETL_LOAD_DATES table to track all ETL executions

2. Use the IS_INCREMENTAL Variable for holding and passing the state

3. Refresh the variable in a package step: if there is an entry in W_ETL_LOAD_DATES, perform an incremental ETL

4. Use RUN_INCREMENTAL to turn the incremental filter on and off

5. Optionally, use the ODI Variable Evaluate Step to branch the Full and Incremental ETL logic within a package

6. Table Maintenance: use the ETL load date information to determine the operations, including Drop Index and Truncate Table before a full load, and Rebuild Index and Analyze Table after the load, etc.
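Here is a hedged sketch of how steps 1, 3, and 4 fit together; the W_ETL_LOAD_DATES column names and the target table are assumptions, not the shipped schema:

```sql
-- Step 3: refresh IS_INCREMENTAL -- incremental if this target
-- table was ever loaded before.
SELECT CASE WHEN COUNT(*) > 0 THEN 'Y' ELSE 'N' END
FROM   w_etl_load_dates
WHERE  target_table_name = 'W_GL_BALANCE_F';

-- Step 4: the filter the interface applies only when
-- IS_INCREMENTAL = 'Y' (disabled on a full load):
-- src.last_update_date >
--   (SELECT MAX(load_start_date)
--    FROM   w_etl_load_dates
--    WHERE  target_table_name = 'W_GL_BALANCE_F')
```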
