【Hive】Importing and Exporting Hive Table Data

梅梅的时光 2022-03-11

I. Importing Data into Hive

1. Importing from the Local File System into a Hive Table

First, create a cat_group table in Hive with two string columns, group_id and group_name, using '\t' as the field delimiter:

hive (db)> create table if not exists cat_group(group_id string ,group_name string)
         > row format delimited fields terminated by '\t'
         > stored as textfile;
OK
Time taken: 0.156 seconds

hive (db)> show tables;
OK
cat
cat3
cat_group
Time taken: 0.021 seconds, Fetched: 3 row(s)

The [row format delimited] clause declares the column delimiter Hive should expect when loading data into the table.
The [stored as textfile] clause sets the storage format; TEXTFILE is the default. If the data is plain text, [stored as textfile] means the file can simply be copied to HDFS and Hive will read it directly.
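To see how these two clauses vary, here is a sketch of a comma-delimited variant (the table name cat_group_csv is hypothetical, not part of this walkthrough):

hive (db)> create table if not exists cat_group_csv(group_id string, group_name string)
         > row format delimited fields terminated by ','
         > stored as textfile;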

Load the cat_group file from the local Linux directory /home/data/hive-data into the Hive cat_group table:

hive (db)> load data local inpath '/../home/data/hive-data/cat_group' into table cat_group;
Loading data to table db.cat_group
OK
Time taken: 2.097 seconds

Query the first 10 records with a select ... from ... limit statement:

hive (db)> select * from cat_group limit 10;
OK
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
Time taken: 0.162 seconds, Fetched: 10 row(s)
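Note that load data ... into table appends to the table's existing contents. To replace them instead, Hive accepts an overwrite variant; a minimal sketch:

hive (db)> load data local inpath '/home/data/hive-data/cat_group' overwrite into table cat_group;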



2. Importing from HDFS into Hive

First, create the /data/hive directory on HDFS:

[root@server hive-data]# hdfs dfs -mkdir -p /data/hive

Then upload the cat_group file to that directory:

[root@server hive-data]# hdfs dfs -put /../home/data/hive-data/cat_group /data/hive
[root@server hive-data]# hdfs dfs -ls  /data/hive
Found 1 items
-rw-r--r--   3 root supergroup       2164 2022-03-06 11:07 /data/hive/cat_group

Create the cat_group1 table in Hive:

hive (db)> create table if not exists cat_group1(group_id string ,group_name string)
         > row format delimited fields terminated by '\t'
         > stored as textfile;
OK
Time taken: 0.156 seconds

hive (db)> show tables;
OK
cat
cat3
cat_group
cat_group1
Time taken: 0.021 seconds, Fetched: 4 row(s)

Load the cat_group file from the HDFS directory /data/hive into the cat_group1 table:

-- Note: when loading from HDFS, do not add the LOCAL keyword
hive (db)> load data inpath '/data/hive/cat_group' into table cat_group1;
Loading data to table db.cat_group1
OK
Time taken: 0.539 seconds
hive (db)> select * from cat_group1 limit 10;
OK
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
Time taken: 0.107 seconds, Fetched: 10 row(s)

Note that the source file is moved rather than copied: after the load, the data file that was in /data/hive now lives under the table's warehouse directory /user/hive/warehouse/db.db/cat_group1.
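A quick check from the shell confirms the move; a sketch (output omitted):

[root@server hive-data]# hdfs dfs -ls /data/hive        # cat_group is no longer listed here
[root@server hive-data]# hdfs dfs -ls /user/hive/warehouse/db.db/cat_group1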



3. Importing Query Results into a Hive Table

First, create the cat_group2 table in Hive:

hive (db)> create table if not exists cat_group2(group_id string ,group_name string)
         > row format delimited fields terminated by '\t'
         > stored as textfile;
OK
Time taken: 0.156 seconds

hive (db)> show tables;
OK
cat
cat3
cat_group
cat_group1
cat_group2
Time taken: 0.016 seconds, Fetched: 5 row(s)

There are two ways to load the data from cat_group1 into cat_group2:

-- Method 1: append
hive (db)> insert into table cat_group2 select * from cat_group1;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20220306111859_2bd20950-a787-4a76-8f2d-415dd3517c32
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1646527355398_0001, Tracking URL = http://server:8088/proxy/application_1646527355398_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1646527355398_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2022-03-06 11:21:23,475 Stage-1 map = 0%,  reduce = 0%
2022-03-06 11:21:30,327 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.02 sec
MapReduce Total cumulative CPU time: 1 seconds 20 msec
Ended Job = job_1646527355398_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://192.168.64.183:9000/user/hive/warehouse/db.db/cat_group2/.hive-staging_hive_2022-03-06_11-18-59_462_4910696310996519402-1/-ext-10000
Loading data to table db.cat_group2
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.02 sec   HDFS Read: 6128 HDFS Write: 1751 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 20 msec
OK
Time taken: 153.838 seconds
-- Method 2: overwrite
hive (db)> insert overwrite  table cat_group2 select * from cat_group1;

Either way, the final contents of cat_group2 are the same.
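When one source table feeds several targets, Hive's multi-table insert scans the source only once. A sketch, assuming two hypothetical target tables t_a and t_b with the same schema:

hive (db)> from cat_group1
         > insert into table t_a select group_id, group_name
         > insert overwrite table t_b select group_id, group_name;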



4. Importing Query Results While Creating a Table (CTAS)

Create table cat_group3 in Hive and populate it directly from cat_group2:

hive (db)> create table if not exists cat_group3 as select * from cat_group2;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20220306112908_50eaa6bf-0723-478e-bb8d-0d5101c23c01
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1646527355398_0003, Tracking URL = http://server:8088/proxy/application_1646527355398_0003/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1646527355398_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2022-03-06 11:30:40,422 Stage-1 map = 0%,  reduce = 0%
2022-03-06 11:30:59,723 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.04 sec
MapReduce Total cumulative CPU time: 1 seconds 40 msec
Ended Job = job_1646527355398_0003
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://192.168.64.183:9000/user/hive/warehouse/db.db/.hive-staging_hive_2022-03-06_11-29-08_568_2915460888753908036-1/-ext-10002
Moving data to directory hdfs://192.168.64.183:9000/user/hive/warehouse/db.db/cat_group3
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.04 sec   HDFS Read: 5334 HDFS Write: 1751 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 40 msec
OK
Time taken: 114.647 seconds
hive (db)> select * from cat_group3 limit 10;
OK
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
Time taken: 0.335 seconds, Fetched: 10 row(s)
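A related command is create table ... like, which copies only the schema of an existing table and leaves the new table empty; a minimal sketch (cat_group_empty is a hypothetical name):

hive (db)> create table if not exists cat_group_empty like cat_group2;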



II. Exporting Data from Hive

1. Exporting to the Local File System

First, create the out directory under /home/data/hive-data on the local Linux file system:

Export the data in table cat_group to the local out directory. The first attempt below fails because Hive cannot turn the '/../' prefix into a valid staging path; rerunning with the normalized path succeeds:

hive (db)> insert overwrite local directory '/../home/data/hive-data/out' 
         > row format delimited fields terminated by '\t'
         > select * from cat_group;
FAILED: IllegalArgumentException Pathname /../home/data/hive-data/out/.hive-staging_hive_2022-03-06_11-49-38_025_4043006350015711521-1 from hdfs://192.168.64.183:9000/../home/data/hive-data/out/.hive-staging_hive_2022-03-06_11-49-38_025_4043006350015711521-1 is not a valid DFS filename.
hive (db)> insert overwrite local directory '/home/data/hive-data/out' 
         > row format delimited fields terminated by '\t'
         > select * from cat_group;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20220306115007_37878929-f5db-4e80-af09-0c1ca7b7c60d
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1646527355398_0004, Tracking URL = http://server:8088/proxy/application_1646527355398_0004/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1646527355398_0004
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2022-03-06 11:50:39,500 Stage-1 map = 0%,  reduce = 0%
2022-03-06 11:50:44,601 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
MapReduce Total cumulative CPU time: 1 seconds 30 msec
Ended Job = job_1646527355398_0004
Moving data to local directory /home/data/hive-data/out
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.03 sec   HDFS Read: 5671 HDFS Write: 1680 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 30 msec
OK
Time taken: 37.904 seconds

After the export completes, view the first 10 lines of the file in the local directory:

[root@server out]# cat ./000000_0 |head -n  10
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
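For small result sets, an alternative is to run the query non-interactively and redirect stdout, which skips the insert overwrite machinery entirely; a sketch (the output file name is arbitrary):

[root@server out]# hive -e 'select * from db.cat_group;' > /home/data/hive-data/out/cat_group.txt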



2. Exporting to HDFS

Create the /data/hive/out directory on HDFS:

[root@server out]# hdfs dfs -mkdir /data/hive/out
[root@server out]# hdfs dfs -ls /data/hive
Found 1 items
drwxr-xr-x   - root supergroup          0 2022-03-06 15:57 /data/hive/out

Export the data in the Hive cat_group table to the out directory on HDFS:

hive> insert overwrite directory '/data/hive/out' 
    > row format delimited fields terminated by '\t'
    > select group_id,group_name from cat_group;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20220306164548_6780ee23-d932-40fb-b7e7-55afe932bb33
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1646556082284_0001, Tracking URL = http://server:8088/proxy/application_1646556082284_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1646556082284_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2022-03-06 16:46:54,121 Stage-1 map = 0%,  reduce = 0%
2022-03-06 16:47:00,370 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.03 sec
MapReduce Total cumulative CPU time: 1 seconds 30 msec
Ended Job = job_1646556082284_0001
Stage-3 is selected by condition resolver.
Stage-2 is filtered out by condition resolver.
Stage-4 is filtered out by condition resolver.
Moving data to directory hdfs://192.168.64.183:9000/data/hive/out/.hive-staging_hive_2022-03-06_16-45-48_785_6015608084561284378-1/-ext-10000
Moving data to directory /data/hive/out
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.03 sec   HDFS Read: 5561 HDFS Write: 1680 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 30 msec
OK
Time taken: 73.787 seconds

After the export completes, inspect the files on HDFS:

[root@server ~]# hdfs dfs -ls /data/hive/out
Found 2 items
drwxr-xr-x   - root supergroup          0 2022-03-06 16:03 /data/hive/out/.hive-staging_hive_2022-03-06_16-03-19_927_4455543203588730803-1
-rwxr-xr-x   3 root supergroup       1680 2022-03-06 16:46 /data/hive/out/000000_0

[root@server ~]# hdfs dfs -cat /data/hive/out/000000_0 |head -n 10
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
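Hive also ships an export table command that writes a table's data together with its metadata, so the table can be recreated elsewhere with import. A sketch, assuming the target HDFS directory does not already exist (cat_group_imported is a hypothetical name):

hive (db)> export table cat_group to '/data/hive/export_cat_group';
hive (db)> import table cat_group_imported from '/data/hive/export_cat_group';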



3. Exporting to Another Hive Table

Copy the data in the Hive table cat_group into cat_group4 (the two tables have identical columns and column types).

First, create the cat_group4 table in Hive:

hive (db)> create table if not exists cat_group4(group_id string ,group_name string)
         > row format delimited fields terminated by '\t'
         > stored as textfile;
OK
Time taken: 0.156 seconds

hive (db)> show tables;
OK
cat
cat3
cat_group
cat_group1
cat_group2
cat_group3
cat_group4
Time taken: 0.016 seconds, Fetched: 7 row(s)

Then insert the data from cat_group into cat_group4:

hive> insert into table cat_group4 select * from cat_group;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20220306165034_e2a833b7-d761-4390-a417-4712380b338a
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1646556082284_0002, Tracking URL = http://server:8088/proxy/application_1646556082284_0002/
Kill Command = /usr/local/src/hadoop/bin/hadoop job  -kill job_1646556082284_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2022-03-06 16:51:28,079 Stage-1 map = 0%,  reduce = 0%
2022-03-06 16:51:38,227 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.09 sec
MapReduce Total cumulative CPU time: 1 seconds 90 msec
Ended Job = job_1646556082284_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to directory hdfs://192.168.64.183:9000/user/hive/warehouse/db.db/cat_group4/.hive-staging_hive_2022-03-06_16-50-34_649_124292144843961059-1/-ext-10000
Loading data to table db.cat_group4
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.09 sec   HDFS Read: 6108 HDFS Write: 1751 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 90 msec
OK
Time taken: 65.267 seconds

hive> select * from cat_group4 limit 10;
OK
501	有机食品
502	蔬菜水果
503	肉禽蛋奶
504	深海水产
505	地方特产
506	进口食品
507	营养保健
508	休闲零食
509	酒水茶饮
510	粮油副食
Time taken: 0.132 seconds, Fetched: 10 row(s)
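Since insert ... select accepts any query, a filtered copy only needs a where clause; a sketch that copies a subset of rows:

hive (db)> insert into table cat_group4
         > select group_id, group_name from cat_group
         > where group_id < '506';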
