Wednesday, 1 May 2013

Big Data in the Real World - Part 1 of 2

There is so much talk of Big Data nowadays, mostly from sales people but also from technologists. Karen Lopez, aka DataChick, took issue with the Wikipedia entry on Big Data, which describes it as "data sets so large and complex that it becomes difficult to process using on-hand database management tools or traditional data processing applications". I think that definition is pretty good.

Over the last few months I've tackled a problem which, within my environment, counts as a Big Data problem. The solution is outlined over these next two articles.

Clickstream Conversion Data Collection Process

The daily process

Clickstream data is collected on a schedule, currently daily, via an SSIS package initiated by a step in a SQL Agent job. The underlying package uses the timestamp of the last successful load as the starting point for the next run, which allows flexibility in the collection schedule.
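
To give a rough idea of how the "last successful load" timestamp drives the extract, the step at the start of each run might look something like the sketch below. The etl.ProcessControl table, its column names and the feed name are hypothetical, used purely for illustration:

    -- Fetch the timestamp of the last successful load for this feed
    -- (table, column and feed names are illustrative, not the real schema)
    DECLARE @LastLoad DATETIME;

    SELECT @LastLoad = LastSuccessfulLoad
    FROM   etl.ProcessControl
    WHERE  FeedName = 'Clickstream';

    -- The extract then only pulls rows created after @LastLoad, so the
    -- schedule can be daily, hourly or ad hoc without missing any rows.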

The data source

All clickstream data is collected in a Key-Value-Pair (KVP) database. There are 3 main tables used to store the KVP data (a rough sketch of their shape follows the list):

  • LogFlow - information about users' web journeys 
  • LogQuote - all data related to a quote 
  • LogSales - all data related to a sale 
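
As a minimal sketch, a KVP table such as LogQuote has roughly the shape below. The column names are my own assumptions for illustration, not the actual production schema:

    -- Illustrative shape of one of the KVP log tables
    -- (column names are assumptions; the real tables differ)
    CREATE TABLE dbo.LogQuote
    (
        LogQuoteID  BIGINT IDENTITY(1,1) PRIMARY KEY,
        SessionID   UNIQUEIDENTIFIER NOT NULL,
        KeyName     NVARCHAR(100)    NOT NULL,  -- e.g. a quote attribute name
        KeyValue    NVARCHAR(1000)   NULL,      -- everything stored as text
        CreatedDate DATETIME         NOT NULL DEFAULT GETDATE()
    );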

Only the last 5 months of data are stored there. This is a hangover from a legacy environment where storage was limited. We want to retain all the data, so we collect it each night and store it in our Data Warehouse.

Staging data

Because we perform a number of actions on the data once it is collected, and we don't want to run those actions against highly transactional replicated tables, a set of staging tables exists which exactly mirrors the schema on both our production and replication servers. Diagram 1 shows the SSIS steps for this section of the process. The staging tables are truncated to remove data from the previous run, then data flow tasks collect the data added since the last successful load. This is achieved using a view of the source table, filtered to return only the rows created after the last datetime recorded in the process control metadata.
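
In T-SQL terms the pattern for each staging table is roughly as follows. In practice an SSIS data flow does the actual movement, and all object names here are illustrative only:

    -- Illustrative pattern only; real object names differ
    TRUNCATE TABLE stage.LogQuote;            -- clear the previous run's data

    INSERT INTO stage.LogQuote
    SELECT  src.*
    FROM    dbo.vw_LogQuoteNewRows AS src;    -- view returns only rows with
                                              -- CreatedDate > last successful load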

The KVP data is large due to the volume of transactions and because the value for each row is stored in an NVARCHAR(1000) column. For each action on our website over 150 rows are currently written to the database, and often the data in the NVARCHAR(1000) key value column uses only a fraction of that space. It's a drawback of the KVP implementation, and as a result the database is approximately 90% smaller when converted to a relational schema.

Schema processing

Alongside the constraints of the KVP implementation comes an advantage: KVP key names can be added at any time, which allows us to collect new data without extensive reconfiguration work. However, this means that when collecting the data we cannot transform it into a rigid schema, because the schema may change.

In addition, we can’t constrain the data within the key values to definite data types, as these may also change, so we continue to store the data – at least during collection – in NVARCHAR columns.

Finally, the length of the data within the NVARCHAR columns can vary. We know the maximum length of the current data for each key name, but that could change in the future, so we have to allow for it.



To account for these constraints, before each data load we must scan the incoming data for new key names and for key values that are longer than any previously seen. If we detect a new key name, we must create a column to store that data from now on. If we detect that a key value has been collected which is longer than the current NVARCHAR length in the storage schema, we must widen that column. Both actions require prior knowledge of the key names and the lengths of the key values, which we store in a single metadata table. This table continues to be used throughout the remainder of the collection process.
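
A minimal sketch of those two checks, assuming a hypothetical metadata table meta.KeyColumns with KeyName and MaxLength columns, might look like this:

    -- Sketch only: detect new key names and values longer than before
    -- (meta.KeyColumns and all other object names are hypothetical)

    -- 1. Key names arriving in staging that are missing from the metadata
    INSERT INTO meta.KeyColumns (KeyName, MaxLength)
    SELECT  s.KeyName, MAX(LEN(s.KeyValue))
    FROM    stage.LogQuote AS s
    WHERE   NOT EXISTS (SELECT 1
                        FROM   meta.KeyColumns AS m
                        WHERE  m.KeyName = s.KeyName)
    GROUP BY s.KeyName;
    -- ...followed by an ALTER TABLE ... ADD for each new column on the storage table.

    -- 2. Existing keys whose values have grown past the recorded length
    UPDATE  m
    SET     m.MaxLength = x.NewMax
    FROM    meta.KeyColumns AS m
    JOIN   (SELECT KeyName, MAX(LEN(KeyValue)) AS NewMax
            FROM   stage.LogQuote
            GROUP BY KeyName) AS x
           ON x.KeyName = m.KeyName
    WHERE   x.NewMax > m.MaxLength;
    -- ...followed by ALTER TABLE ... ALTER COLUMN to widen those columns.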

This post is continued in part 2.
