Hive File Formats and Compression Codecs
Overview
As long as the file format and compression codec are configured correctly (for example TEXTFILE + Gzip, or SEQUENCEFILE + Snappy), Hive can read and parse the data as expected and expose it through SQL.
The SequenceFile format is designed with content compression built in. Compressing a SequenceFile therefore does not mean generating the file first and then compressing the whole file afterwards; instead, the record contents are compressed while the SequenceFile is being written. The compressed result is still a SequenceFile to the outside world.
RCFile, ORCFile, Parquet, and Avro handle compression in the same way as SequenceFile.
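The distinction above can be sketched in a few lines of Python (illustrative only: it uses zlib as a stand-in for Hadoop's actual codecs). Compressing the whole file yields one opaque stream, while a container format compresses only the record payloads and keeps the container layout intact:

```python
import zlib

# Sample rows as raw bytes (stand-ins for Hive records).
records = [b"1001810081,cheyo", b"1001810082,pku", b"1001810083,rocky"]

# TEXTFILE + Gzip/DEFLATE style: the entire output becomes one
# opaque compressed stream.
whole_file = zlib.compress(b"\n".join(records))

# SequenceFile/RCFile/ORC/Parquet/Avro style (sketched): the container
# stays a container; only the record payloads inside it are compressed.
compressed_records = [zlib.compress(r) for r in records]

# A single record can be recovered without touching the others...
assert zlib.decompress(compressed_records[1]) == records[1]
# ...whereas the whole-file stream must be decompressed in full first.
assert zlib.decompress(whole_file).split(b"\n")[1] == records[1]
```

This is why the compressed output of these formats is still splittable and readable by Hive, while a gzipped text file is a single opaque blob.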
File Formats
- Textfile
- SequenceFile
- RCFile
- ORCFile
- Parquet
- Avro
Compression Codecs
TEXTFILE
- --Create a table stored as a text file:
- CREATE EXTERNAL TABLE student_text (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Load data into the table; this launches an MR job
- INSERT OVERWRITE TABLE student_text SELECT * FROM student;
The generated data file can be seen to be an uncompressed plain-text file:
- hdfs dfs -cat /user/hive/warehouse/student_text/000000_0
- 1001810081,cheyo
- 1001810082,pku
- 1001810083,rocky
- 1001810084,stephen
- 2002820081,sql
- 2002820082,hello
- 2002820083,hijj
- 3001810081,hhhhhhh
- 3001810082,abbbbbb
Text file, DEFLATE compression
- --Create a table stored as a text file:
- CREATE TABLE student_text_def (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set the compression type to DEFLATE (DefaultCodec)
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
- --Load data:
- INSERT OVERWRITE TABLE student_text_def SELECT * FROM student;
- --Query the data
- SELECT * FROM student_text_def;
Listing the data files shows several .deflate files:
- hdfs dfs -ls /user/hive/warehouse/student_text_def/
- -rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000000_0.deflate
- -rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000001_0.deflate
- -rw-r--r-- 2015-09-16 12:48 /user/hive/warehouse/student_text_def/000002_0.deflate
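Assuming DefaultCodec's standard zlib framing (an assumption; check your Hadoop version), a .deflate file fetched locally with `hdfs dfs -get` could be inspected with Python's zlib module. A minimal sketch with simulated file contents:

```python
import zlib

# Simulated contents of one .deflate output file; in practice, fetch the
# real file first with `hdfs dfs -get .../000000_0.deflate`.
rows = b"1001810081,cheyo\n1001810082,pku\n"
deflate_bytes = zlib.compress(rows)  # zlib-framed DEFLATE data

# Decompressing recovers the plain CSV rows.
assert zlib.decompress(deflate_bytes) == rows
```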
Text file, Gzip compression
- --Create a table stored as a text file:
- CREATE TABLE student_text_gzip (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set the compression type to Gzip
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
- --Load data:
- INSERT OVERWRITE TABLE student_text_gzip SELECT * FROM student;
- --Query the data
- SELECT * FROM student_text_gzip;
Listing the data files shows several .gz files. Decompressing a .gz file reveals the plain text:
- hdfs dfs -ls /user/hive/warehouse/student_text_gzip/
- -rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000000_0.gz
- -rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000001_0.gz
- -rw-r--r-- 2015-09-15 10:03 /user/hive/warehouse/student_text_gzip/000002_0.gz
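That a .gz file decompresses back to the original rows can be checked locally; a minimal sketch with Python's gzip module and simulated file contents:

```python
import gzip

# Simulated contents of one .gz output file (fetch the real one with
# `hdfs dfs -get` before inspecting it locally).
rows = b"1001810081,cheyo\n1001810082,pku\n"
gz_bytes = gzip.compress(rows)

# gunzip (or gzip.decompress) recovers the plain text.
assert gzip.decompress(gz_bytes) == rows
```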
Text file, Bzip2 compression
- --Create a table stored as a text file:
- CREATE TABLE student_text_bzip2 (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set the compression type to Bzip2:
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.BZip2Codec;
- --Load data
- INSERT OVERWRITE TABLE student_text_bzip2 SELECT * FROM student;
- --Query the data:
- SELECT * FROM student_text_bzip2;
Listing the data files shows several .bz2 files. Decompressing a .bz2 file reveals the plain text:
- hdfs dfs -ls /user/hive/warehouse/student_text_bzip2
- -rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000000_0.bz2
- -rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000001_0.bz2
- -rw-r--r-- 2015-09-15 10:09 /user/hive/warehouse/student_text_bzip2/000002_0.bz2
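The same local check works for the .bz2 output, sketched with Python's bz2 module and simulated file contents:

```python
import bz2

# Simulated contents of one .bz2 output file (fetch the real one with
# `hdfs dfs -get` before inspecting it locally).
rows = b"1001810081,cheyo\n1001810082,pku\n"
bz2_bytes = bz2.compress(rows)

# bunzip2 (or bz2.decompress) recovers the plain text.
assert bz2.decompress(bz2_bytes) == rows
```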
Text file, LZO compression
- --Create the table
- CREATE TABLE student_text_lzo (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set LZO compression
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec;
- --Load data
- INSERT OVERWRITE TABLE student_text_lzo SELECT * FROM student;
- --Query the data
- SELECT * FROM student_text_lzo;
Listing the data files should show several .lzo files; decompressing an .lzo file reveals the plain text.
Not verified here: the lzop library must be installed first.
Text file, LZ4 compression
- --Create the table
- CREATE TABLE student_text_lz4 (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set LZ4 compression
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.Lz4Codec;
- --Load data
- INSERT OVERWRITE TABLE student_text_lz4 SELECT * FROM student;
Listing the data files shows several .lz4 files. Viewing an .lz4 file with cat shows only compressed bytes.
- hdfs dfs -ls /user/hive/warehouse/student_text_lz4
- -rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000000_0.lz4
- -rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000001_0.lz4
- -rw-r--r-- 2015-09-16 12:06 /user/hive/warehouse/student_text_lz4/000002_0.lz4
Text file, Snappy compression
- --Create the table
- CREATE TABLE student_text_snappy (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS TEXTFILE;
- --Set compression
- SET hive.exec.compress.output=true;
- SET mapred.compress.map.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
- SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
- --Load data
- INSERT OVERWRITE TABLE student_text_snappy SELECT * FROM student;
- --Query the data
- SELECT * FROM student_text_snappy;
Listing the data files shows several .snappy files. Viewing a .snappy file with cat shows only compressed bytes:
- hdfs dfs -ls /user/hive/warehouse/student_text_snappy
- Found 3 items
- -rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000000_0.snappy
- -rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000001_0.snappy
- -rw-r--r-- 2015-09-15 16:42 /user/hive/warehouse/student_text_snappy/000002_0.snappy
SEQUENCEFILE
SequenceFile, DEFLATE compression
- --Create a table stored as SEQUENCEFILE:
- CREATE TABLE student_seq_def (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS SEQUENCEFILE;
- --Set the compression type to DEFLATE (DefaultCodec)
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.DefaultCodec;
- --Load data:
- INSERT OVERWRITE TABLE student_seq_def SELECT * FROM student;
- --Query the data
- SELECT * FROM student_seq_def;
The data file is a single binary file; the compression is internal to the SequenceFile:
- hdfs dfs -ls /user/hive/warehouse/student_seq_def/
- -rw-r--r-- /user/hive/warehouse/student_seq_def/000000_0
SequenceFile, Gzip compression
- --Create a table stored as SEQUENCEFILE:
- CREATE TABLE student_seq_gzip (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS SEQUENCEFILE;
- --Set the compression type to Gzip
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
- --Load data:
- INSERT OVERWRITE TABLE student_seq_gzip SELECT * FROM student;
- --Query the data
- SELECT * FROM student_seq_gzip;
The data file is binary and cannot be decompressed with gzip, because it is still a SequenceFile container:
- hdfs dfs -ls /user/hive/warehouse/student_seq_gzip/
- -rw-r--r-- /user/hive/warehouse/student_seq_gzip/000000_0
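gzip refuses these files because the output is still a SequenceFile container, not a bare gzip stream; only the record blocks inside it are gzip-compressed. A minimal sketch checking the magic bytes (`is_sequence_file` and `looks_like_gzip` are illustrative helpers, not Hive APIs):

```python
# A SequenceFile starts with the magic bytes b'SEQ' plus a version byte;
# a gzip stream starts with b'\x1f\x8b'.
def is_sequence_file(header: bytes) -> bool:
    return header.startswith(b"SEQ")

def looks_like_gzip(header: bytes) -> bool:
    return header.startswith(b"\x1f\x8b")

# First bytes of a hypothetical compressed SequenceFile (version 6):
header = b"SEQ\x06"
assert is_sequence_file(header)      # still a SequenceFile container
assert not looks_like_gzip(header)   # which is why gzip rejects it
```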
RCFILE
RCFile, Gzip compression
- CREATE TABLE student_rcfile_gzip (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS RCFILE;
- --Set the compression type to Gzip
- SET hive.exec.compress.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
- --Load data:
- INSERT OVERWRITE TABLE student_rcfile_gzip SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_rcfile_gzip;
ORCFile
ORCFile has its own table properties for configuring compression; the session-level Hive parameters shown above are generally not used for ORC.
ORCFile, ZLIB compression
- --Create the table
- CREATE TABLE student_orcfile_zlib (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS ORCFILE TBLPROPERTIES ("orc.compress"="ZLIB");
- --Load data
- INSERT OVERWRITE TABLE student_orcfile_zlib SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_orcfile_zlib;
ORCFile, Snappy compression
- --Create the table
- CREATE TABLE student_orcfile_snappy2 (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS ORCFILE TBLPROPERTIES ("orc.compress"="SNAPPY");
- --Load data
- INSERT OVERWRITE TABLE student_orcfile_snappy2 SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_orcfile_snappy2;
The approach below is generally not used. It produces a different result from the SNAPPY table property above, likely because ORC applies its own internal compression (set via orc.compress) while the mapred.* settings target the MR job output stream; the exact cause needs further investigation.
- --Create the table
- CREATE TABLE student_orcfile_snappy (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS ORCFILE;
- --Set compression
- SET hive.exec.compress.output=true;
- SET mapred.compress.map.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
- SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
- --Load data
- INSERT OVERWRITE TABLE student_orcfile_snappy SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_orcfile_snappy;
Parquet
Parquet, Snappy compression
Note: later Hive releases also let you set Parquet compression with a table property, TBLPROPERTIES ("parquet.compression"="SNAPPY"), analogous to ORC; the session-level settings below are what was tested here.
- --Create the table
- CREATE TABLE student_parquet_snappy (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS PARQUET;
- --Set compression
- SET hive.exec.compress.output=true;
- SET mapred.compress.map.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
- SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
- --Load data
- INSERT OVERWRITE TABLE student_parquet_snappy SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_parquet_snappy;
Avro
Avro, Snappy compression
- --Create the table
- CREATE TABLE student_avro_snappy (id STRING, name STRING)
- ROW FORMAT DELIMITED
- FIELDS TERMINATED BY ','
- LINES TERMINATED BY '\n'
- STORED AS AVRO;
- --Set compression
- SET hive.exec.compress.output=true;
- SET mapred.compress.map.output=true;
- SET mapred.output.compress=true;
- SET mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;
- SET io.compression.codecs=org.apache.hadoop.io.compress.SnappyCodec;
- --Load data
- INSERT OVERWRITE TABLE student_avro_snappy SELECT id,name FROM student;
- --Query the data
- SELECT * FROM student_avro_snappy;