When we think of a data warehouse, we typically think of a database that stores the data, plus the schema and the ETL programs we need to design to populate it. If we take the “database” out of the above description, can it still be called a data warehouse?
Posted by Dylan Wan on September 18, 2016
Posted by Dylan Wan on March 17, 2016
Data warehousing is really about preparing the data for reporting. The assumptions are:
- You can predict, to some extent, what typical queries look like.
- The data need to be prepared to make queries easier or faster, or to make more sense of the data.
- You know where the data come from, so you can Extract from the source.
- You know what the target looks like, so you can Transform the data.
- You Load the data somewhere so you do not need to query the source directly.
The future of data warehousing depends on whether the above assumptions still hold. Other factors relate to technology and to the source data available.
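The Extract / Transform / Load assumptions above can be sketched in a few lines. The source records, field names, and target shape here are all hypothetical, purely for illustration:

```python
# Hypothetical source records; in practice this would be a database or file.
source = [{"first": "Ada", "last": "Lovelace", "amount": "100.5"}]

def extract():
    # You know where the data come from, so you can extract them.
    return list(source)

def transform(rows):
    # You know what the target looks like, so you can reshape the data.
    return [{"name": r["first"] + " " + r["last"],
             "amount": float(r["amount"])} for r in rows]

warehouse = []

def load(rows):
    # Load the data so queries hit the warehouse, not the source.
    warehouse.extend(rows)

load(transform(extract()))
print(warehouse)  # [{'name': 'Ada Lovelace', 'amount': 100.5}]
```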
Posted by Dylan Wan on December 16, 2015
Team-based security refers to a specific data security requirement scenario.
Team-based security means that the object being secured has a “team” associated with it, and the team members can access the object.
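A minimal sketch of this scenario; the team names, object names, and users below are invented for illustration:

```python
# Hypothetical team membership: team -> set of member user ids.
teams = {
    "apac_sales": {"alice", "bob"},
    "emea_sales": {"carol"},
}

# Hypothetical secured objects: each object carries an owning team.
objects = {
    "Q3 Pipeline Report": "apac_sales",
    "EMEA Forecast": "emea_sales",
}

def can_access(user, obj):
    """A user can access an object only if they belong to its team."""
    team = objects.get(obj)
    return team is not None and user in teams.get(team, set())

print(can_access("alice", "Q3 Pipeline Report"))  # True
print(can_access("alice", "EMEA Forecast"))       # False
```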
Posted by Dylan Wan on November 30, 2015
The definition of multi-tenancy varies. Some people think that supporting tenants is simply a matter of data striping: adding a tenant ID to every table to support multiple tenants.
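A minimal sqlite sketch of that tenant-ID-on-every-table approach; the table, columns, and tenant names are made up for illustration:

```python
import sqlite3

# One shared table; every row is striped by a tenant ID.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?)",
                 [("acme", 100.0), ("acme", 50.0), ("globex", 75.0)])

# Every query must filter by tenant so each tenant sees only its own rows.
total, = conn.execute(
    "SELECT SUM(amount) FROM invoices WHERE tenant_id = ?", ("acme",)
).fetchone()
print(total)  # 150.0
```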
Posted by Dylan Wan on November 16, 2015
When do you need to use UNION in BI?
You need to use UNION when you would like to combine multiple data sets into a single data set. However, when exactly do you need to do this in BI or in data warehouse ETL? Here are some real business cases:
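As one hypothetical illustration of such a case, consider combining employees and contractors into a single workforce data set (the tables and names below are invented, with sqlite standing in for the warehouse):

```python
import sqlite3

# Two source data sets that the report needs to treat as one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT)")
conn.execute("CREATE TABLE contractors (name TEXT)")
conn.executemany("INSERT INTO employees VALUES (?)", [("alice",), ("bob",)])
conn.executemany("INSERT INTO contractors VALUES (?)", [("carol",)])

# UNION ALL stacks the two result sets into a single data set.
rows = conn.execute(
    "SELECT name FROM employees UNION ALL SELECT name FROM contractors"
).fetchall()
print(sorted(r[0] for r in rows))  # ['alice', 'bob', 'carol']
```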
Posted by Dylan Wan on November 10, 2015
ODI Topology allows you to isolate the physical connection and the logical data source by defining the physical schema and logical schema.
This object may seem redundant during development. However, it is a very useful feature for supporting the Test to Production (T2P) process.
Posted by Dylan Wan on October 30, 2015
Almost all data warehouses have a date dimension. The purpose of the date dimension is to provide pre-calculated groupings for dates. It helps roll up data entered against dates to a higher level, such as year, quarter, month, or week.
In some systems, source files are used to generate the date dimension. IMHO, that makes modifying the logic difficult. In some ETL programs, the task involves various table joins trying to generate the rows for the year range.
This post describes how to populate a table with a row for each date in a given year range.
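A minimal Python sketch of the idea: generate one row per date for a given year range, with a few pre-calculated groupings (the column names are illustrative, not the post's actual schema):

```python
from datetime import date, timedelta

def date_dimension(start_year, end_year):
    """Yield one row per calendar date in [start_year, end_year]."""
    d = date(start_year, 1, 1)
    end = date(end_year, 12, 31)
    while d <= end:
        yield {
            "date_key": d.isoformat(),
            "year": d.year,
            "quarter": (d.month - 1) // 3 + 1,
            "month": d.month,
            "week": d.isocalendar()[1],  # ISO week number
        }
        d += timedelta(days=1)

rows = list(date_dimension(2015, 2016))
print(len(rows))  # 731 (2016 is a leap year)
```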
Posted by Dylan Wan on October 21, 2015
I just added a post about using initialization blocks in OBIEE.
It is a feature I was excited about when I first saw the tool.
Posted by Dylan Wan on October 17, 2015
Here is a list of features available from Amazon QuickSight:
| Category | Feature |
| --- | --- |
| Data Source | Connect to supported AWS data sources |
| Data Source | Upload flat files |
| Data Source | Access third-party data sources |
| Data Preparation | Data Preparation Tools |
| Visualization | Access all chart types |
| Data Access | Capture and Share, Collaborate |
| Data Access | API/ODBC connection to SPICE |
| Security | Encryption at Rest |
| Security | Active Directory Integration |
| Security | Fine-grained User Access Control |
| Security | Enable Audit Logs with AWS CloudTrail |
| Performance | In-memory calculation with SPICE |
| Performance | Scale to thousands of users |
| Performance | Support up to petabytes of data |
I categorize the features into these groups:
- Data Source
- Data Preparation
- Visualization
- Data Access (or Alternate Access)
- Security
- Performance
They are almost the same features available from other BI tools, such as OBIEE, except for the in-memory engine and perhaps the scalability. Here are some questions I have.
Posted by Dylan Wan on October 14, 2015
Data Mashup is a new feature in OBIEE 12c.
It is one of the two main features of OBIEE 12c; the other one is Visual Analyzer.
When I tested the data mashup feature, it supported these two scenarios.
Posted by Dylan Wan on October 8, 2015
In my post Data Warehouses on Cloud – Amazon Redshift, I mentioned that what would be really useful is providing BI on the cloud, not just a data warehouse on the cloud.
I felt that BICS makes more sense compared to Amazon Redshift.
I discussed this with a couple of people at a meetup last night. Some of them are using Amazon Redshift. Here is what I heard:
Posted by Dylan Wan on October 5, 2015
Not all BI tools have a semantic layer. For example, Oracle Discoverer does not seem to have a strong semantic layer.
This page summarizes what the OBIEE semantic layer can do for you…
Posted by Dylan Wan on October 4, 2015
These are different concepts.
Data Lake – Collect data from various sources in a central place. The data are stored in their original form. Big data technologies are used, and thus the typical data storage is Hadoop HDFS.
Data Warehouse – The “traditional” way of collecting data from various sources for reporting. The data are consolidated and integrated. A data warehouse design that follows the dimensional modeling technique may store data in a star schema with fact tables and dimension tables. Typically a relational database is used.
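A minimal sketch of such a star schema, using sqlite in Python; the table and column names are made up for illustration:

```python
import sqlite3

# One fact table keyed to a date dimension: the simplest star schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_sales (date_key INTEGER REFERENCES dim_date, amount REAL);
INSERT INTO dim_date VALUES (20150101, 2015, 1), (20160101, 2016, 1);
INSERT INTO fact_sales VALUES (20150101, 100.0), (20160101, 40.0), (20160101, 60.0);
""")

# Roll the facts up to year level by joining to the dimension.
result = conn.execute("""
    SELECT d.year, SUM(f.amount)
    FROM fact_sales f JOIN dim_date d USING (date_key)
    GROUP BY d.year ORDER BY d.year""").fetchall()
print(result)  # [(2015, 100.0), (2016, 100.0)]
```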
Posted by Dylan Wan on October 2, 2015
A DMZ is a network configuration that lets you make selected services accessible from outside the firewall.
Some users may want to access corporate reports from mobile devices or from their personal computers.
While VPN and Citrix may be useful for these cases, DMZ can provide another option.
Posted by Dylan Wan on September 22, 2015
I feel these rules apply to any cloud-based data warehouse solution. In general, I feel that on-premise data warehouse deployments will probably remain for a while.
1. For a columnar database, “select *” is bad
I think that the projection needs to be done as early as possible and should be pushed down.
If a column is not needed in the downstream flow, it should not be selected in the first place.
If the application logic is defined in metadata, the tool should read it and generate the extract logic.
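A minimal sketch of pushing the projection down: the column list is a hypothetical stand-in for metadata, and sqlite stands in for the columnar store:

```python
import sqlite3

# A wide source table; downstream only needs two of its columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer TEXT, "
             "amount REAL, notes TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'acme', 99.5, 'rush')")

# Columns the downstream flow actually uses, e.g. read from metadata.
needed = ["order_id", "amount"]

# Generate the extract with an explicit projection instead of "SELECT *".
sql = "SELECT {} FROM orders".format(", ".join(needed))
out = conn.execute(sql).fetchall()
print(out)  # [(1, 99.5)]
```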
Posted by Dylan Wan on September 17, 2015
Here is a brief summary of what I learned by reading these materials.
1. The data warehouse is stored in clusters
It can scale out, rather than just scale up.
“Extend the existing data warehouse rather than adding hardware”
2. Use SQL to access the data warehouse
3. Load data from Amazon S3 (Simple Storage Service) using an MPP process
4. Partition / Distribute the data by time
“The BI team wanted to calculate some expensive analytics on a few years of data, so we just restored a snapshot and added a bunch of nodes for a few days”
Posted by Dylan Wan on September 15, 2015
OTBI Enterprise is the BI cloud service, a SaaS deployment of OBIA. It uses a data warehouse based architecture, and the ETL processes are handled within the cloud. The data are first loaded from either on-premise or cloud sources, using various means, in their original formats, into the SDS schema; then the regular ETL processes that were designed for on-premise deployment populate the data warehouse. All these processes are supposed to be transparent to the deploying companies.
Posted by Dylan Wan on September 9, 2015
I attended a great meetup and this is the question I have after the meeting.
Perhaps the intent is to make it like a DBMS, like Oracle, or even a BI platform, like OBIEE?
The task flow is actually very similar to a typical database profiling and data analysis job.
1. Define your question
2. Understand and identify your data
3. Find the approach / model that can be used
Posted by Dylan Wan on September 3, 2015
Today I saw a quite impressive demo at the Global Big Data Conference.
AtScale provides a BI metadata tool for data stored in Hadoop.
At first, I thought this was just another BI tool that accesses Hadoop via Hive, like what we have in OBIEE. I heard that the SQL performance of BI queries over Hive can be very slow. The typical issue is that when a query involves joins, the SQL join may be translated into map/reduce code by Hive. Doing the join this way may not be as effective as in an RDBMS.
However, the concept is actually very different here. Traditionally, ROLAP is built on a relational database, and we use a relational join between the fact table and the dimension table. With Oracle-acquired tools like Endeca, we already saw the data modeling principles change. Endeca does not model data in a star schema; it simply denormalizes dimension data into the fact table, and it can thus run queries fast. AtScale seems to do exactly the same thing. When the data are stored in the Hadoop cluster, they are not normalized into separate fact and dimension tables. The data are stored as they come from the source, with the dimensions duplicated into the fact. There is really no join here. The closest design technique in OBIEE I can think of is the degenerate dimension approach. However, will it work with Hadoop as a source?
What really impressed me is the concept of Schema on Demand. I feel this is actually the major challenge for ROLAP and relational database technology. When we model potential additional attributes, we have to add placeholder columns to the relational table. However, in data storage / database technologies that store attributes as key/value pairs or as a map, the data do not have to be stored as columns. This is actually nothing new: the Oracle database has supported VARRAY since Oracle 8. However, no BI tool I am aware of supports this Oracle object type. While the Oracle database has moved beyond supporting only relational tables, BI tools still assume relational tables only.
It seems that AtScale solves this challenge by generating metadata that performs the attribute-map-to-column transformation. I guess we will see these big data technologies start moving into the traditional BI tool space. It is not due to the 3 Vs of big data; it is due to the flexibility.
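A toy Python sketch of that attribute-map-to-column idea (not AtScale's actual mechanism; the data and function here are invented for illustration):

```python
# Records store attributes as a key/value map rather than fixed columns,
# so new attributes need no placeholder columns.
rows = [
    {"id": 1, "attrs": {"color": "red", "size": "M"}},
    {"id": 2, "attrs": {"color": "blue"}},
]

def project(records, attr_names):
    """Promote the requested attribute keys to columns on demand,
    filling missing attributes with None."""
    return [
        tuple([r["id"]] + [r["attrs"].get(a) for a in attr_names])
        for r in records
    ]

print(project(rows, ["color", "size"]))
# [(1, 'red', 'M'), (2, 'blue', None)]
```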
Posted by Dylan Wan on September 2, 2015
DRM is a generic data management application.
It provides a web based application that allows the deploying company to maintain the data.
It is a collaboration tool that allows you to define the validation and set up the data security duties and share the maintenance.
Earlier, the tool was designed to maintain account information. However, it can actually be used to maintain extensions to any dimension.
The key is that it enables the deploying company to capture and maintain information that lives outside the transaction system, for BI reporting purposes.