So you've managed to Sqoop some data into HDFS. Great. What now? You'll want to query it. But with what? You don't know Java, so writing a MapReduce job is out. You do know SQL, so why not query the data using SQL? For that you'll need one of the many SQL on Hadoop tools that have emerged to make data analysis in Hadoop available to the large number of data professionals who already know SQL.
Since we're concerned here with Sqoop, we'll concentrate on Hive with a special mention for Impala. But there are other SQL on Hadoop tools out there which would satisfy most use cases.
Hive
Developed by Facebook as a data warehousing framework over Hadoop, Hive provides a SQL-like language called HiveQL which is accessible to anyone with SQL knowledge. Hive converts queries into MapReduce jobs for processing. Since MapReduce can be slow, query times are also slow compared to SQL Server or other RDBMSs. Hive maintains a metastore which acts as its metadata manager.
Impala
Developed by Cloudera as an MPP query engine over Hadoop, Impala also provides an accessible SQL-like language. Instead of using MapReduce, Impala sends queries to agents installed on each data node in the cluster, which then return their part of the result back to Impala. Impala removes MapReduce as a bottleneck by controlling the query execution itself and utilising the data nodes, along with the Hive metastore, to plan the query. This makes Impala many times faster than Hive, with execution speeds nearing - but not quite reaching - those of traditional RDBMSs. Impala is designed for quick data analysis, but when used against large datasets memory can become a problem. When you absolutely need the query to complete, or you have large volumes of data, Hive is the best bet. When you want an answer fast over smaller datasets, Impala is the right choice.
Importing Data Into Hive
By adding the --hive-import option to your Sqoop import command line you can load the data straight into Hive. Sqoop will examine the metadata of the table being imported and create a matching table in Hive. The data types won't match exactly: Sqoop approximates them by default, but you can gain greater control with specific arguments. NULL handling can also be tricky, but again you can define exactly how you want NULLs handled if you don't like the results of the default Sqoop behaviour.
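If you want finer control over the type mapping, the --map-column-hive argument lets you override Sqoop's choice for individual columns. A sketch (the column names and target types here are purely illustrative, borrowed from the Customers table used later):
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --map-column-hive MarketingOptIn=BOOLEAN,CreatedDateTime=STRING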
As Hive is designed to act like a Data Warehouse a common use case is to update a table periodically with new data. Hive can use partitions to store this new data more efficiently. The analogy here is with SQL Server clustered indexes as we're controlling how data is stored on disk to make the queries run faster.
Hive tables are stored on HDFS. Remember this is a write-once cluster with the potential to be read often by many possible tools, so the data is written to disk in a format that can be read easily. Hive stores the schema in its metastore and only at the point of query is the data interpreted against that schema - an approach known as schema-on-read. As the data is stored on HDFS we can use different file types to store the data in the most efficient manner. A common format used with Hive is Parquet, which is optimised for data analysis work.
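As a sketch (assuming a recent Sqoop release where --as-parquetfile can be combined with --hive-import), you can ask Sqoop to write the imported data as Parquet instead of delimited text:
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --as-parquetfile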
Basic Hive Import
The --hive-import argument for the import command provides an easy way of populating Hive.
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import
Overwrite Existing Data
The --hive-overwrite option is analogous to a TRUNCATE and INSERT in SQL Server.
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --hive-overwrite
If you use --hive-overwrite without --hive-import it won't work; the process will default to a plain HDFS load instead.
If the table has already been loaded into HDFS and you don't specify a target directory, you'll get an error stating that the output directory already exists.
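One way around this, assuming you're happy for the existing HDFS copy to be removed, is the --delete-target-dir option, which deletes the import directory before the job runs. A sketch:
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --delete-target-dir --hive-import --hive-overwrite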
Creating Tables In Hive
If your data already exists in HDFS and you want to load it into Hive, you will need to define the schema yourself, since Sqoop has no source metadata to use when creating the Hive table.
Hive CREATE TABLE syntax is similar to other SQL variants. Here is a simple example:
CREATE TABLE [Table Name]
([Column Name 1] [DataType],
[Column Name 2] [DataType],
[Column Name 3] [DataType],
...
[Column Name n] [DataType])
STORED AS [File Type];
For more complex examples and a full syntax explanation see the Hive wiki. It also contains information about data types, which are different in Hive than in SQL Server, so beware.
To execute Hive code you need to open the Hive shell. Do this by typing hive in the terminal session and you'll see the prompt change to hive>.
For our Customers table, a create script using a simple delimited text format (so that the comma-separated data file can be loaded directly) would be as follows; converting the table to Parquet is shown after the load.
hive> CREATE TABLE Customers
(CustomerID INT,
FirstName VARCHAR(50),
Surname VARCHAR(50),
DateOfBirth DATE,
EmailAddress VARCHAR(100),
PostCode VARCHAR(8),
MarketingOptIn TINYINT,
AutoRenewalOptIn TINYINT,
CreatedDateTime TIMESTAMP,
ModifiedDateTime TIMESTAMP)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> LOAD DATA LOCAL INPATH "[Folder]/[FileName]" INTO TABLE Customers;
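If you do want the data stored as Parquet, as mentioned above, one approach (a sketch; the table name customers_parquet is just illustrative) is to create a Parquet-backed copy of the delimited table with a CREATE TABLE AS SELECT:
hive> CREATE TABLE customers_parquet STORED AS PARQUET AS SELECT * FROM Customers;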
Handling NULLs
Sqoop and Hive represent NULLs differently by default, so you need to tell Sqoop explicitly how to write them when loading data into Hive. This is easily remedied using the two arguments --null-string and --null-non-string with the value '\\N' (Hive's textual representation of NULL), as shown below.
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --null-string '\\N' --null-non-string '\\N'
Handling Delimiters
After importing into Hive with Sqoop you can run SELECT count(*) FROM your_table to see how many rows were loaded. If the number of rows is larger than in the source table you may have hit a problem with delimiters.
The issue arises when the data contains characters that Hive uses as delimiters, e.g. \n, \t or \001. This can happen when text-based columns contain newline or tab characters.
Sqoop can remove these characters from your data if you use the argument --hive-drop-import-delims. For example,
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --hive-drop-import-delims
If the special characters are required you can use the argument --hive-delims-replacement, which substitutes a replacement string of your choosing (although how useful that would be in the data is not clear). It works like this,
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --hive-delims-replacement "SPECIAL"
Loading Into Partitions
If you're incrementally loading data into Hive, perhaps as part of a daily load, you can use partitions to separate the data while Hive still treats these separate blocks of data as one table. This is useful if, for example, you want to swap out a day's load that went wrong without reloading all the data.
Partitions also act like Clustered Indexes in SQL Server as the data is written to disk in sequential blocks making retrieval easier.
Sqoop supports Hive partitioning with two additional parameters: --hive-partition-key, the name of the partition column, and --hive-partition-value, the value. So if you partition on the column CreatedDate and you're loading data for 31st May 2016, the value would be 2016-05-31 and the Sqoop command would read as follows:
sqoop import --connect "jdbc:sqlserver://10.88.12.0:1433;database=Utility;username=ClouderaAccess;password=GiveMeData" --table Customers --hive-import --hive-partition-key CreatedDate --hive-partition-value "2016-05-31"
Sqoop mandates that the partition column be of type STRING. The current implementation
is limited to a single partition level so you can't do nested partitions.
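Going back to the earlier point about swapping out a day's load that went wrong, here is a sketch of how you might drop a single partition from the Hive shell (assuming the partitioned load above):
hive> ALTER TABLE Customers DROP IF EXISTS PARTITION (CreatedDate='2016-05-31');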
Using Impala
Using Hue, the Cloudera equivalent of SQL Server Management Studio, you can run Impala queries over the data we've already loaded into Hive. As Impala relies on the Hive metastore, only data which has been loaded into Hive can be queried using Impala.
To refresh Impala's knowledge of the Hive metastore after a data load you can run Impala's INVALIDATE METADATA command, which is poorly named as it actually performs a refresh and leaves Impala ready to use. Use it as follows,
INVALIDATE METADATA;
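In practice it's often quicker to limit this to the table that changed and then query it straight away; a sketch in the Impala shell, using the table from the earlier examples:
INVALIDATE METADATA Customers;
SELECT COUNT(*) FROM Customers;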