The DW/BI/ETL Testing Training Course is designed for both entry-level and advanced programmers. The course covers the foundations of data warehousing, dimensional modeling with the important aspects of dimensions, facts and slowly changing dimensions, the DW/BI/ETL setup, Database Testing vs. Data Warehouse Testing, the data warehouse workflow with a case study, data checks using SQL, and the scope of BI testing. As a bonus, you also get the steps to set up the environment with the most popular ETL tool, Informatica, so you can perform all the activities on your personal computer and gain first-hand practical knowledge.
In this lecture we talk about the layout of the course, what is covered, and how to get the best out of it.
ETL is commonly associated with data warehousing projects, but in reality any form of bulk data movement from a source to a target can be considered ETL. ETL testing is a data-centric testing process that validates that the data has been transformed and loaded into the target as expected.
In this lecture we also talk about data testing and challenges in ETL testing.
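To make this concrete, here is a minimal sketch of the kind of source-to-target data check covered later in the course, written in Python against SQLite; the connections, table names and columns (orders, fact_orders, amount) are purely illustrative and not part of the course material.

```python
import sqlite3

# Hypothetical connections: in a real project these would point at the
# source system and the warehouse, not at local SQLite files.
source = sqlite3.connect("source.db")
target = sqlite3.connect("warehouse.db")

def row_count(conn, table):
    """Return the number of rows in a table."""
    return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def column_sum(conn, table, column):
    """Return the sum of a numeric column, used as a simple checksum."""
    return conn.execute(f"SELECT SUM({column}) FROM {table}").fetchone()[0]

# Completeness check: every source row should have reached the target.
assert row_count(source, "orders") == row_count(target, "fact_orders")

# Accuracy check: the total amount should survive the transformation.
assert column_sum(source, "orders", "amount") == column_sum(target, "fact_orders", "amount")
```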
This is one of the common questions asked by most non-Java/Big Data IT professionals about their current technologies and their future.
Especially in the ETL and DW world, the future looks better than ever, since "Big Data" increases the demand for better data processing, and these tools excel at exactly that.
The original intent of the data warehouse was to segregate analytical operations from mainframe transaction processing in order to avoid slowdowns in transaction response times, and minimize the increased CPU costs accrued by running ad hoc queries and creating and distributing reports. Over time, the enterprise data warehouse became a core component of information architectures, and it's now rare to find a mature business that doesn't employ some form of an EDW or a collection of smaller data marts to support business intelligence, reporting and analytics applications.
In this lecture we see what the future of the data warehouse will be in the age of Big Data.
Data is a collection of raw material in an unorganized format that refers to an object.
The concept of data warehousing is not hard to understand. The notion is to create a permanent storage space for the data needed to support reporting, analysis, and other BI functions. In this lecture we look at the main reasons for creating a data warehouse and the benefits it brings.
This long list of benefits is what makes data warehousing an essential management tool for businesses that have reached a certain level of complexity.
A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources.
In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.
Test your understanding of the Data Warehouse basics
The data mart is a subset of the data warehouse that is usually oriented to a specific business line or team. Data marts are small slices of the data warehouse. Whereas data warehouses have an enterprise-wide depth, the information in data marts pertains to a single department.
Data Warehouse:
Data Mart:
The primary advantages are:
Disadvantages of Data Marts are discussed in this lecture.
This lecture talks about the mistakes and misconceptions people have with regard to the data warehouse.
Test your understanding of the Data Mart Concepts
In this lecture we see how the Centralized architecture is set up, in which a single data warehouse stores all the data necessary for business analysis.
In a Federated architecture the data is logically consolidated but stored in separate physical databases, at the same or at different physical sites. The local data marts store only the information relevant to a department.
The amount of data is reduced in contrast to a central data warehouse. The level of detail is enhanced in this kind of model.
A Multi-Tiered architecture is a distributed data approach. This process cannot be done in a single step because many sources have to be integrated into the warehouse.
Different data warehousing systems have different structures. Some may have an ODS (operational data store), while some may have multiple data marts. Some may have a small number of data sources, while some may have dozens of data sources. In view of this, it is far more reasonable to present the different layers of a data warehouse architecture rather than discussing the specifics of any one system.
In general, all data warehouse systems have the following layers:
This is where data is stored prior to being scrubbed and transformed into a data warehouse / data mart. Having one common area makes subsequent data processing / integration easier. Based on the business architecture and design there can be more than one staging area, and they may be named using different conventions.
Test your understanding of the Data Warehouse Architecture
Data modeling is the formalization and documentation of existing processes and events that occur during application software design and development.
The following aspects will be discussed in this lecture:
• Functional and Technical Aspects
• Completeness in the design
• Understanding DB Test Execution
• Validation
Data modeling techniques and tools capture and translate complex system designs into easily understood representations of the data flows and processes, creating a blueprint for construction and/or re-engineering.
An entity–relationship model (ER model) is a data model for describing the data or information aspects of a business domain or its process requirements, in an abstract way that lends itself to ultimately being implemented in a database such as a relational database.
A Dimensional Model is a database structure that is optimized for online queries and data warehousing tools. It is composed of "fact" and "dimension" tables. A "fact" is a numeric value that a business wishes to count or sum. A "dimension" is essentially an entry point for getting at the facts.
In this lecture we talk about the differences between ER model and the Dimensional Model.
To build a Dimensional Model we need to follow five different phases:
Data Modelers have to interact with business analysts to get the functional requirements and with end users to find out the reporting needs.
The conceptual model includes all major entities and relationships, but it does not contain much detail about attributes and is often used in the initial planning phase.
In this phase the conceptual model is implemented as a logical data model. A logical data model is the version of the model that represents all of the business requirements of an organization.
This is a complete model that includes all required tables, columns, relationships, and database properties for the physical implementation of the database.
DBAs or ETL developers prepare the scripts to create the entities, attributes and their relationships.
In this lecture we also talk about the process of creating database scripts that can be reused multiple times.
Test your understanding of how to create a Dimensional Data Model
A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. Commonly used dimensions are people, products, place and time. In a data warehouse, dimensions provide structured labeling information to otherwise un-ordered numeric measures.
In data warehousing, a fact table consists of the measurements, metrics or facts of a business process. It is often located at the center of a star schema, surrounded by dimension tables.
There are four types of facts.
The numeric measures in a fact table fall into three categories. The most flexible and useful facts are fully additive; additive measures can be summed across any of the dimensions associated with the fact table. Semi-additive measures can be summed across some dimensions, but not all; balance amounts are common semi-additive facts because they are additive across all dimensions except time. Non-additive measures, such as ratios, cannot be summed across any dimension.
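As a small illustration of the difference (with made-up account balances), summing a semi-additive balance across accounts on a single day is meaningful, while summing it across days is not:

```python
# Illustrative only: daily account balances, a classic semi-additive measure.
balances = {
    ("acct_1", "2024-01-01"): 100,
    ("acct_1", "2024-01-02"): 100,
    ("acct_2", "2024-01-01"): 50,
}

# Summing across the account dimension for one day is meaningful:
# the total amount held on that day.
total_jan_1 = sum(v for (acct, day), v in balances.items() if day == "2024-01-01")  # 150

# Summing the same balance across time is not: 100 + 100 does not mean the
# account ever held 200. Over time, semi-additive measures are typically
# averaged or reported as the last value instead.
avg_over_time = sum(v for (acct, day), v in balances.items() if acct == "acct_1") / 2  # 100.0
```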
A star schema is the simplest form of a dimensional model, in which data is organized into facts and dimensions.
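Below is a minimal, illustrative star schema sketched in Python with SQLite; the sales fact and its dimensions (dim_date, dim_product, dim_store) are invented for this example and are not part of the course material.

```python
import sqlite3

# A minimal star schema: one fact table surrounded by dimension tables.
# All table and column names here are made up for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_store   (store_key INTEGER PRIMARY KEY, city TEXT, region TEXT);

-- The fact table holds numeric measures plus foreign keys to each dimension.
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date(date_key),
    product_key  INTEGER REFERENCES dim_product(product_key),
    store_key    INTEGER REFERENCES dim_store(store_key),
    quantity     INTEGER,
    sales_amount REAL
);
""")
```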
The snowflake schema is diagrammed with each fact surrounded by its associated dimensions (as in a star schema), and those dimensions are further related to other dimensions, branching out into a snowflake pattern.
The Galaxy schema is also known as a fact constellation schema because it is a combination of both the star schema and the snowflake schema.
When choosing a database schema for a data warehouse, snowflake and star schema tend to be popular choices. This comparison discusses suitability of star vs. snowflake schema in different scenarios and their characteristics.
A conformed dimension is a dimension that has exactly the same meaning and content when being referred from different fact tables. A conformed dimension can refer to multiple tables in multiple data marts within the same organization.
A Junk dimension combines miscellaneous low-cardinality flag and indicator fields into a single dimension. This way, we only need to build a single dimension table, and the number of fields in the fact table, as well as the size of the fact table, can be decreased.
According to Ralph Kimball, who originated the term, a degenerate dimension is a dimension key in the fact table that does not have its own dimension table, because all the interesting attributes have been placed in analytic dimensions.
A single physical dimension can be referenced multiple times in a fact table, with each reference linking to a logically distinct role for the dimension. For instance, a fact table can have several dates, each of which is represented by a foreign key to the date dimension.
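As a hedged illustration of such role-playing, the sketch below queries one physical date dimension twice, once in the "order date" role and once in the "ship date" role; the fact_orders table and its column names are hypothetical.

```python
# Illustrative only: one physical dim_date table referenced in two roles.
# fact_orders is assumed to carry order_date_key and ship_date_key foreign keys.
role_playing_query = """
SELECT o.full_date AS order_date,
       s.full_date AS ship_date,
       f.sales_amount
FROM   fact_orders AS f
JOIN   dim_date    AS o ON f.order_date_key = o.date_key
JOIN   dim_date    AS s ON f.ship_date_key  = s.date_key;
"""
# The same table plays two logically distinct roles by being joined twice
# through different foreign keys, e.g. conn.execute(role_playing_query).
```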
Slowly Changing Dimensions (SCDs) are dimensions that change slowly over time, rather than on a regular, time-based schedule.
There are many approaches to dealing with SCDs. The most popular are Type 1 (overwrite the old value), Type 2 (add a new row to preserve history) and Type 3 (add a new column to hold the previous value).
Dimension, Fact and SCD Type 1, 2 and 3 are reviewed in this lecture.
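To ground the Type 2 approach, here is a minimal sketch in Python with SQLite; the dim_customer table, its columns, and the sample data are invented for illustration and are not taken from the course.

```python
import sqlite3
from datetime import date

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
    customer_key   INTEGER PRIMARY KEY AUTOINCREMENT,
    customer_id    TEXT,     -- natural/business key
    city           TEXT,     -- the slowly changing attribute
    effective_from TEXT,
    effective_to   TEXT,     -- NULL means "current row"
    is_current     INTEGER
);
INSERT INTO dim_customer (customer_id, city, effective_from, effective_to, is_current)
VALUES ('C001', 'Boston', '2020-01-01', NULL, 1);
""")

def scd_type2_update(conn, customer_id, new_city, change_date):
    """Type 2 change: expire the current row and insert a new current row."""
    conn.execute(
        "UPDATE dim_customer SET effective_to = ?, is_current = 0 "
        "WHERE customer_id = ? AND is_current = 1",
        (change_date, customer_id),
    )
    conn.execute(
        "INSERT INTO dim_customer (customer_id, city, effective_from, effective_to, is_current) "
        "VALUES (?, ?, ?, NULL, 1)",
        (customer_id, new_city, change_date),
    )

scd_type2_update(conn, "C001", "Chicago", str(date(2021, 6, 15)))
# History is preserved: the Boston row is expired, the Chicago row is current.
```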
Test your understanding of Dimensional Model Objects
Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. A complete data integration solution delivers trusted data from a variety of sources.
ETL is short for extract, transform, load, three database functions that are combined into one tool to pull data out of one database and place it into another database.
Extract is the process of reading data from a database.
Transform is the process of converting the extracted data from its previous form into the form it needs to be in so that it can be placed into another database. Transformation occurs by using rules or lookup tables or by combining the data with other data.
Load is the process of writing the data into the target database.
ETL is used to migrate data from one database to another, to form data marts and data warehouses and also to convert databases from one format or type to another.
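A minimal, hedged sketch of the three steps in Python, with SQLite standing in for real source and target systems; the customers and dim_customer tables and the country lookup are made up for the example.

```python
import sqlite3

# Extract: read rows from a (hypothetical) source database.
source = sqlite3.connect("source.db")
rows = source.execute(
    "SELECT customer_id, first_name, last_name, country FROM customers"
).fetchall()

# Transform: apply simple rules, e.g. trim names and resolve country codes
# via a lookup table, mirroring the "rules or lookup tables" idea above.
country_lookup = {"US": "United States", "IN": "India"}
transformed = [
    (cid, f"{first.strip()} {last.strip()}", country_lookup.get(country, country))
    for cid, first, last, country in rows
]

# Load: write the reshaped rows into the (hypothetical) target warehouse table.
target = sqlite3.connect("warehouse.db")
target.executemany(
    "INSERT INTO dim_customer (customer_id, full_name, country) VALUES (?, ?, ?)",
    transformed,
)
target.commit()
```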
Data acquisition is the process of extracting data from different source systems (operational databases), integrating and transforming it into a homogeneous format, and loading it into the target warehouse database; in short, ETL (Extraction, Transformation and Loading). Different ETL vendors refer to the data acquisition process by different names.
Data transformation is the process of converting data or information from one format to another, usually from the format of a source system into the required format of a new destination system.
In this lecture we discuss the common questions that are raised about Data Integration and ETL.
Test your understanding of Data Integration and ETL
ELT (Extract, Load, Transform) is a variation of ETL in which the raw data is loaded directly into the target system and transformed there, rather than being transformed on an intermediate server before loading.
ELT makes sense when the target is a high-end data engine, such as a data appliance, a Hadoop cluster, or a cloud installation. If that power is there, why not use it?
ETL, on the other hand, is designed using a pipeline approach. While data is flowing from the source to the target, a transformation engine (something unique to the tool) takes care of any data changes.
Which is better depends on priorities. All things being equal, it’s better to have fewer moving parts. ELT has no transformation engine – the work is done by the target system, which is already there and probably being used for other development work. On the other hand, the ETL approach can provide drastically better performance in certain scenarios. The training and development costs of ETL need to be weighed against the need for better performance. (Additionally, if you don’t have a target system powerful enough for ELT, ETL may be more economical.)
Project sponsorship is an active senior management role, responsible for identifying the business need, problem or opportunity. The sponsor ensures the project remains a viable proposition and that benefits are realized, resolving any issues outside the control of the project manager.
This person will oversee the progress and be responsible for the success of the data warehousing project.
The role of the business analyst is to perform research and possess knowledge of existing business applications and processes to assist in identification of potential data sources, business rules being applied to data as it is captured by and moved through the transaction processing applications, etc. Whenever possible, this role should be filled by someone who has extensive prior experience with a broad range of the organization's business applications.
A subject-matter expert (SME) or domain expert is a person who is an authority in a particular area or topic. The term domain expert is frequently used in expert systems software development, and there the term always refers to the domain other than the software domain.
Data Warehouse Architect: These job responsibilities encompass definition of overall data warehouse architectures and standards, definition of data models for the data warehouse and all data marts, evaluation and selection of infrastructure components including hardware, DBMS, networking facilities and ETL (extract, transform and load) software, and performing application design and related tasks.
Data Modeler: The person(s) in this role prepares data models for the source systems based on information provided by the business and/or data analysts. Additionally, the data modeler may assist with the development of the data model(s) for the EDW or a data mart, guided by the data warehouse architect. This individual may also assist in the development of business process models, etc.
This position is responsible for maintaining hardware reliability, system level security, system level performance monitoring and tuning, and automation of production activities including extract and load functions, repetitively produced queries/reports, etc. The duties include the setup of user IDs and system access roles for each person or group which is given access to the data warehouse or data mart and monitoring the file system for space availability. In many cases, the system administrator is responsible for ensuring that appropriate disaster recovery functions such as system level backups are performed correctly and on an accepted schedule.
The person or persons functioning within this role will need a substantial understanding of the data warehouse design, load function, etc. Potentially the DW developer may also be required to have some knowledge of the tools and programs used to extract data from the source systems and perform maintenance on those applications. Additionally the ETL Developer may be required to be knowledgeable in the data access tools and perform some data access function development.
In this lecture, we talk about the roles of the reporting team members who create static dashboards and reporting structures.
If your project is large enough to require dedicated resources for system administration and database administrators (DBAs), it is possible you will want a person who will provide leadership and direction for these efforts. This would be someone who is familiar with the hardware and software likely to be used, experienced in administration of these areas and who can direct tuning and optimization efforts as warehouse development and use moves forward in the organization. Including the infrastructure team within the large data warehousing group helps ensure that the needed resources are available as needed to ensure that the project stays on track and within budget.
A data architect is a practitioner of data architecture, an information technology discipline concerned with designing, creating, deploying and managing an organization's data architecture.
A data warehouse architect does a lot more than just data modelling. They also are responsible for the data architecture, ETL, database platform and physical infrastructure.
A business intelligence architect (BI architect) is a top-level business intelligence analyst who deals with specific aspects of business intelligence, a discipline that uses data in certain ways and builds specific architectures to benefit a business or organization. The business intelligence architect will generally be responsible for creating or working with these architectures, which serve the specific purpose of maximizing the potential of data assets.
Systems architects define the architecture of a computerized system (i.e., a system composed of software and hardware) in order to fulfill certain requirements.
Solution architecture is a practice of defining and describing an architecture of a system delivered in context of a specific solution and as such it may encompass description of an entire system or only its specific parts. Definition of a solution architecture is typically led by a solutions architect.
An enterprise architect is a person responsible for performing this complex analysis of business structure and processes and is often called upon to draw conclusions from the information collected.
Please note that the roles explained above are neither an exhaustive list nor mandatory for every project. Role creation and selection depend on the project's architecture and business flow: a single role mentioned here can be split into several, or a couple of roles can be merged into one, based on the requirement.
Test your understanding of the different roles in a DWH project
A quick recap of the different phases which are involved in most of the DW/BI/ETL projects.
In this lecture we talk about the key feature of knowledge sharing sessions held before the requirements are gathered.
A critical early activity is requirement creation, or the BRD (Business Requirement Document) creation. Requirements gathering sounds like common sense, but surprisingly, it's an area that is given far too little attention.
In this lecture we talk about the BRD's best practices and common mistakes to avoid.
The importance of the Architecture phase and its dependency on the previous two phases are explained in this lecture.
Once the Architecture phase is complete, the Data Model/Database phase will convert the Conceptual Data model to the Logical data model and then to the Physical data model.
In this lecture we learn about the ETL phase and how it takes about 70% of the overall project implementation time.
Data Access is the OLAP layer, or the Reporting layer. There are multiple ways the data can be accessed; here are a few of them.
Each of these is discussed in further detail in the next lectures.
Selection is the most common and important feature of any OLAP tool.
Drilling down through a database involves accessing information by starting with a general category and moving through the hierarchy: from category to file/table to record to field.
When one drills down, one performs de facto data analysis on a parent attribute. Drilling down provides a method of exploring multidimensional data by moving from one level of detail to the next. Drill-down levels depend on the data granularity.
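A small sketch of a drill-down, reusing the illustrative fact_sales/dim_product tables from the star schema example and a hypothetical "Electronics" category: the same measure is first summarized at the category level and then broken out at the finer product level.

```python
import sqlite3

conn = sqlite3.connect("warehouse.db")  # hypothetical warehouse database

# Level 1: totals by product category (the general starting point).
by_category = conn.execute("""
    SELECT d.category, SUM(f.sales_amount)
    FROM   fact_sales f JOIN dim_product d ON f.product_key = d.product_key
    GROUP  BY d.category
""").fetchall()

# Drill down: within one category, break the same measure out by product.
by_product = conn.execute("""
    SELECT d.name, SUM(f.sales_amount)
    FROM   fact_sales f JOIN dim_product d ON f.product_key = d.product_key
    WHERE  d.category = ?
    GROUP  BY d.name
""", ("Electronics",)).fetchall()
```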
Exception reporting eliminates the need to review countless reports to identify and address key business process issues before they begin to negatively impact a firm’s operations or profitability.
In this lecture, we talk about the measures of the facts and how these are calculated based on the business validations and requirements.
Visualization is the process of representing data graphically and interacting with these representations in order to gain insight into the data. Traditionally, computer graphics has provided a powerful mechanism for creating, manipulating, and interacting with these representations.