Real Time ETL

What is Real Time ETL? What does it mean? The question keeps coming up in discussions with customers and prospects, at enterprises large and small, with tool jockeys and home-grown coders alike. It surfaces in debates about EAI vs. ETL (a subject for another blog), changed data capture, transactional vs. batch processing, and more. I won't debate the definitions of real-time, right-time, real-time data warehousing, active data warehousing, just-in-time, or near-real-time; a lot of really smart people have already been there. I just want to look at what people are actually doing, and calling, Real Time ETL.

Trying to formally define real time isn't easy; there are many points of view, and critical differences by industry segment. Those of us in the commercial "data world" spend lots of time discussing the finer points of "real time." I stopped trying to come up with a single definition after reading pure academic and engineering definitions of "real-time computing" that talked about robotic arms on an assembly line reacting in microsecond "real time" to things like minute temperature changes!

I'd like to reflect here instead on the technical aspects of common patterns that those of us in the data integration space run into with Real-Time ETL, and mention some of the gotchas that often go overlooked. I see four basic patterns that, depending on your point of view and the problem you are trying to solve, qualify as Real Time ETL:

1. Frequently executed ETL processes (e.g., every 5 minutes, every minute, or every 10 seconds). Really a "batch" pattern, but run in small windows with tiny quantities of data (tiny, at least, by comparison to large batch loads).
2. Messaging or another "continually live" medium as a source.
3. Messaging or another "continually live" medium as a target.
4. Request/response with a continually live medium on both ends (source and target).

The second pattern interests me right now, as I've had numerous questions on the subject in the past few days. I want to speak here about the technical definition for jobs, maps, procedures (or whatever you call your ETL processes) that need to "read" data from a commonly accepted "real time" technology. Real-time sources may be popular messaging engines, such as MQSeries, TIBCO Rendezvous, or MSMQ; Java-based standards such as JMS; or more custom solutions such as sockets or even named pipes. Most ETL tools can access these, or provide extensions that make it possible to use some of the lesser-known APIs.

This is the most commonly requested pattern. When someone says "I need Real-Time ETL," it generally turns out that they want to "read" from such a source. Reasons for needing it vary. Some sites want immediate updates to decision support systems or portals, while others are merely "dipping" into an available stream that is passing through for other purposes. An already-built MQSeries infrastructure, shipping messages between applications, is often the perfect source of data for ETL, whether the objective is immediate updates or not. It's just "there" and available, and simpler to get at than wrestling with the security folks for access to legacy source systems. Of course there are hundreds of variants, whether the target is decision-support oriented (a data warehouse or data mart) or ERP (such as SAP). Either way, I'm talking about a persistent target.
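To make the pattern concrete, here is a minimal sketch, assuming a plain TCP socket as the "continually live" source (a stand-in for MQSeries, JMS, or MSMQ, whose real client APIs differ) and a sqlite3 connection as the persistent target. All table and field names are illustrative.

```python
# Minimal sketch of "messaging as a source": read newline-delimited JSON
# messages from a live socket and trickle-feed them into a persistent target.
# The socket stands in for a real messaging API; the target is assumed to be
# a sqlite3 connection. Table and field names are made up for illustration.
import json
import socket

def consume(host, port, target_conn):
    with socket.create_connection((host, port)) as sock:
        buf = b""
        while True:                        # the process never "finishes"
            chunk = sock.recv(4096)
            if not chunk:                  # a live source closing is a failure,
                raise ConnectionError("live source disconnected")  # not EOF
            buf += chunk
            while b"\n" in buf:
                line, buf = buf.split(b"\n", 1)
                row = json.loads(line)
                target_conn.execute(
                    "INSERT INTO fact_events (event_id, payload) VALUES (?, ?)",
                    (row["id"], json.dumps(row)),
                )
                target_conn.commit()       # one message, one visible row
```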

Regardless of the reasons, such ETL processes have to deal with issues like the following (the sketch after this list illustrates all three):

*   Always on. Typically an initialization issue. ETL tools do a lot of preparation when they start: they validate connections, formally PREPARE their SQL, load data into memory, establish parallel processes, and so on. Twenty seconds of initialization may be acceptable in a 45-minute batch job that processes half a gigabyte; in a real-time scenario, it isn't. You can't afford to perform all of that initialization for every message or packet. It needs to be done once, leaving the process "always on" and waiting for new data. I like to think of it "floating" while it waits. Of course, this invites other problems...
*   End-of-file processing for "blocking" functionality. If you have an "always on" job, what do you do when someone wants an aggregation or a sum() function? How does the process know it's finished and can flush rows through such an operation? This is particularly critical when we move on to Web Services in the request/response pattern, but it is equally important when reading messages that contain multiple rows, such as when the message payload is a complex XML document.
*   Live vs. buffered or in-memory lookups. A common performance technique in large-volume batch processes is to bring lookup values into memory. The same performance concerns apply to "always on" jobs, but consider that "always on" means you need a strategy for refreshing that in-memory copy. Otherwise you must ensure that a constant connection to the original source is feasible and performs well, and that the DBA who owns the real-time source won't kill your long-running database connection in an "always on" scenario.
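Here is a sketch of how those three gotchas play out in one "always on" loop, using only the Python standard library. The get_message() source is a hypothetical API standing in for a real queue; the connection is assumed to be sqlite3, and table names are illustrative. Initialization happens once, aggregates are flushed on a time window because no end-of-file ever arrives, and the in-memory lookup is refreshed on a schedule.

```python
# Sketch of the three "always on" gotchas in one loop.
import time

WINDOW_SECS = 60            # no EOF exists, so flush aggregates every minute
LOOKUP_REFRESH_SECS = 600   # refresh the cached dimension every 10 minutes

def load_lookup(conn):
    # the expensive part: pull the whole dimension into memory
    return dict(conn.execute("SELECT nat_key, surrogate_key FROM dim_product"))

def run(source, conn):
    lookup = load_lookup(conn)                  # initialization: done ONCE
    totals = {}
    window_end = time.time() + WINDOW_SECS
    next_refresh = time.time() + LOOKUP_REFRESH_SECS
    while True:                                 # "floating", waiting for data
        msg = source.get_message(timeout=1)     # hypothetical source API
        now = time.time()
        if msg is not None:
            sk = lookup.get(msg["product"])     # in-memory lookup, no DB trip
            totals[sk] = totals.get(sk, 0) + msg["amount"]
        if now >= window_end:                   # window boundary stands in for EOF
            for sk, amount in totals.items():
                conn.execute("INSERT INTO agg_sales VALUES (?, ?)", (sk, amount))
            conn.commit()
            totals, window_end = {}, now + WINDOW_SECS
        if now >= next_refresh:                 # keep the cached copy fresh
            lookup = load_lookup(conn)
            next_refresh = now + LOOKUP_REFRESH_SECS
```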

These aren't the only issues, and there are numerous ways of dealing with them. Make sure the tool or techniques you choose can handle each of them.

Indexes. Classic ETL load procedures invite age-old techniques like removing indexes before bulk loads, then rebuilding them afterward for improved load times. In the 24x7 "real time" pattern, the indexes are usually left on. There may be no reasonable time to stop the loading and re-create them (as there would be during a scheduled batch window), or the ETL procedure itself may need certain indices for lookup purposes. Other applications running concurrently may also need them. Perhaps most important, the real-time "trickle feed" ETL pattern is often chosen precisely to avoid a huge one-time batch load: rows come in all day long instead of piling up, and the performance hit is not as great.
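A small sketch of the contrast, in SQLite syntax with illustrative table and index names:

```python
# Classic batch pattern: drop the index, bulk load, rebuild. Trickle-feed
# pattern: leave the index on and pay a tiny maintenance cost per row.
def bulk_load(conn, rows):
    conn.execute("DROP INDEX IF EXISTS idx_sales_date")     # classic batch trick
    conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", rows)
    conn.execute("CREATE INDEX idx_sales_date ON fact_sales(sale_date)")
    conn.commit()

def trickle_load(conn, row):
    conn.execute("INSERT INTO fact_sales VALUES (?, ?, ?)", row)
    conn.commit()   # index stays on; each small insert is individually cheap
```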

Beyond these runtime gotchas, there are fundamental design decisions to make up front. These include determining:
*   Whether it is better to use an ETL suite of tools or to hand-code the ETL process with available resources.
*   Whether batch processing will provide the data in a timely manner.
*   How much of the ETL process will be automated with schedulers, alert notifications, and workflow procedures.

Rationale:
Certain design elements are fundamental, necessary first decisions in the development of an ETL system. These choices affect everything, and a change to any of them can mean implementing the entire system over again from the start. The key is to apply these design elements consistently.

Benefits:
By addressing these design elements, we ensure that the ETL system can do the following:
*   Deliver data most effectively to end-user tools
*   Add value to data in the cleaning and conforming steps
*   Protect and document the lineage of data

Batch vs. Streaming Data Flow:
The standard design for an ETL system is based on periodic batch extracts from the source data, which then flow through the system and result in a batch update to the data the ETL system delivers. However, when the real-time nature of the delivered data becomes sufficiently urgent, it may be necessary to implement a streaming data flow, in which data flows continuously, at the record level, from the extraction process all the way to the delivered tables.
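A sketch of the two flow styles, with hypothetical extract/stream callables standing in for a real source:

```python
# Batch flow: bounded extract, one visible update. Streaming flow: unbounded,
# record-level, each row becomes visible as it arrives. The extract, stream,
# and load arguments are hypothetical stand-ins for a real source and target.
import time

def transform(row):
    return {**row, "loaded_at": time.time()}   # trivial example transform

def batch_flow(extract, load):
    rows = list(extract())                     # periodic, bounded extract
    load([transform(r) for r in rows])         # one visible batch update

def streaming_flow(stream, load):
    for row in stream:                         # never-ending record flow
        load([transform(row)])                 # each record visible immediately
```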

Scheduler Automation:
It must be determined how deeply to control the overall ETL system with automated scheduler technology. At one extreme, all jobs are manually controlled and executed. At the other extreme, a master scheduler tool manages all the ETL jobs, statuses, alerts and flow processes.
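At its core, the "master scheduler" extreme looks something like the sketch below. This is a hand-rolled loop purely for illustration; real shops would use cron, a commercial scheduler, or the scheduler built into their ETL suite. The alert callback is a hypothetical hook.

```python
# Minimal master-scheduler sketch: jobs registered with an interval, statuses
# reported, failures routed to an alert hook.
import time
import traceback

jobs = []   # each entry: {"name", "every" (secs), "fn", "next" (epoch secs)}

def register(name, every_secs, fn):
    jobs.append({"name": name, "every": every_secs, "fn": fn, "next": 0.0})

def run_forever(alert):
    while True:
        now = time.time()
        for job in jobs:
            if now >= job["next"]:
                try:
                    job["fn"]()
                    print(job["name"], "OK")
                except Exception:
                    alert(job["name"], traceback.format_exc())
                    print(job["name"], "FAILED")
                job["next"] = now + job["every"]
        time.sleep(1)
```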

Exception Handling:
Exception handling should not be a random series of alerts or comments placed in files. It should be a system-wide, uniform mechanism that reports every exception raised by the ETL processes into a single database, recording the name of the process, the time of the exception, its initially diagnosed severity, the action subsequently taken, and the ultimate resolution status of the exception.
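As a sketch, that single exception database might look like this in SQLite, with one column per field named above (schema and names are illustrative; any shared database would do):

```python
# Uniform, system-wide exception log: every ETL process reports into one
# table with the same fields.
import datetime
import sqlite3

def open_exception_log(path="etl_exceptions.db"):
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS etl_exceptions (
        process_name TEXT, exception_time TEXT, severity TEXT,
        action_taken TEXT, resolution_status TEXT, detail TEXT)""")
    return conn

def report_exception(conn, process, severity, detail,
                     action="logged", status="OPEN"):
    conn.execute("INSERT INTO etl_exceptions VALUES (?, ?, ?, ?, ?, ?)",
                 (process, datetime.datetime.utcnow().isoformat(),
                  severity, action, status, detail))
    conn.commit()
```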

Quality Handling:
All quality problems need to generate an audit record attached to the final dimension or fact data. Corrupted or suspect data should be handled with a small number of uniform responses, such as filling in missing text with a question mark or supplying a least-biased estimate for missing numeric values.
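A sketch of those uniform responses, using the column mean as the least-biased numeric estimate and a question mark for missing text; every fix also appends an audit record tied to the outgoing row (field names, including the "id" key, are illustrative):

```python
# Uniform quality responses: '?' for missing text, the column mean as a
# least-biased estimate for missing numerics, plus an audit trail per fix.
def clean_row(row, numeric_means, audit):
    fixed = dict(row)
    for col, val in row.items():
        if val is not None:
            continue
        if col in numeric_means:              # numeric: least-biased estimate
            fixed[col] = numeric_means[col]
            audit.append((row["id"], col, "imputed_mean"))
        else:                                 # text: uniform '?' marker
            fixed[col] = "?"
            audit.append((row["id"], col, "missing_text"))
    return fixed
```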

Recovery & Restart:
You need to build your ETL system around the ability to recover from the abnormal ending of a job and restart. ETL jobs need to be reentrant, or otherwise impervious to incorrect multiple updating.
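One common way to get reentrancy is an idempotent upsert keyed on the natural key, so a rerun after a crash cannot double-apply rows. A sketch in SQLite syntax, assuming a hypothetical fact_orders table where order_id is the primary key:

```python
# Reentrant load step: an upsert means running the job twice produces the
# same result as running it once. Assumes order_id is a unique/primary key.
def idempotent_load(conn, rows):
    conn.executemany(
        """INSERT INTO fact_orders (order_id, amount)
           VALUES (?, ?)
           ON CONFLICT(order_id) DO UPDATE SET amount = excluded.amount""",
        rows)
    conn.commit()
```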

Metadata:
In ETL, a metadata repository is where all the metadata about sources, targets, transformations, mappings, workflows, sessions, and so on is stored. From this repository, metadata can be manipulated, queried, and retrieved with the help of wizards provided by metadata-capture tools. During the ETL process, when we map source to target systems, we are actually mapping their metadata. A well-maintained repository is therefore a handy resource for understanding the organization's data systems. Different departments may have different business definitions, data types, or attribute names for the same attribute, or a single business definition shared by many attributes; these anomalies can be resolved by properly maintaining metadata for the attributes in a centralized repository. Metadata thus plays a vital role in explaining how, why, and where data can be found, retrieved, stored, and used efficiently in an information management system.
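As a tiny illustration, two departments' names for the same attribute reconcile to one target and one business definition in the repository (all names, definitions, and types below are made up):

```python
# Sketch of centralized metadata entries: different departmental attribute
# names map to one target attribute and one shared business definition.
source_to_target = {
    "CRM.cust_nm": {
        "target": "DW.customer.name",
        "business_def": "Legal customer name",
        "data_type": "VARCHAR(120)",
    },
    "BILLING.customer_name": {
        "target": "DW.customer.name",
        "business_def": "Legal customer name",
        "data_type": "VARCHAR(120)",
    },
}
```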

Security:
Physical and administrative safeguards need to surround every on-line table and backup tape in the ETL environment. Archived data sets should be stored with checksums to verify that they have not been altered in any way.
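The checksum safeguard is easy to sketch: store a SHA-256 digest alongside each archived data set, and verify it before the archive is ever trusted again.

```python
# Store a SHA-256 digest next to each archive; verify before reuse.
import hashlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_archive(path, recorded_digest):
    return sha256_of(path) == recorded_digest
```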

My next post will be on Data Lineage: what is it? Please let me know if you find this useful; any comments, negative or positive, will be a learning point.

Mehboob

MCTS & MCITP
