Enter the beeline client
beeline -u jdbc:hive2://hadoop102:10000 -n atguigu
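Once connected, any simple statement works as a sanity check, for example:
show databases;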
Fields within each row are separated by tabs
row format delimited fields terminated by '\t'
Load local data into the test table
load data local inpath '/opt/module/hive/datas/test.txt' into table test;
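A quick select afterwards confirms the rows arrived (assuming the test table was created with the row format clause above):
select * from test;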
Create the student table and load data into it
create table if not exists student(
id int, name string
)
row format delimited fields terminated by '\t';
load data local inpath '/opt/module/hive/datas/student.txt' into table student;
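As an illustration only (sample rows, not the actual course data file), a tab-separated student.txt matching the (id int, name string) schema could look like:
1	zhangsan
2	lisi
A quick select then verifies the load:
select * from student;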
View the detailed table structure
desc formatted student;
Modify table properties (convert a managed table into an external table)
alter table student set tblproperties('EXTERNAL'='TRUE');
And the reverse (convert it back into a managed table)
alter table student set tblproperties('EXTERNAL'='FALSE');
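Either change can be verified with desc formatted: the Table Type field switches between MANAGED_TABLE and EXTERNAL_TABLE accordingly
desc formatted student;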
Rename a table
ALTER TABLE table_name RENAME TO new_table_name;
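For example, with a hypothetical new name student2:
alter table student rename to student2;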
Enable local mode
set hive.exec.mode.local.auto=true;
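Local mode only kicks in for small jobs; the input-size and file-count thresholds can be tuned with the settings below (the values shown are the usual defaults, an assumption worth checking against your Hive version):
set hive.exec.mode.local.auto.inputbytes.max=134217728;
set hive.exec.mode.local.auto.input.files.max=4;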
Set the number of reducers
set mapreduce.job.reduces=3;
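The setting only matters for queries that actually shuffle; a minimal illustrative example against the student table above (not from the original notes):
select * from student distribute by id sort by id desc;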
Configuration file for Flume's three core components (source, channel, sink)
Create the file (the start command below uses job/flume-netcat-logger.conf) and add the following content:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start the Flume agent to begin listening on the monitoring port:
bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console
If the sink is a Logger Sink, this option needs to be added:
-Dflume.root.logger=INFO,console
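With the agent running, events can be sent to the netcat source from another terminal and will show up in the logger sink's console output:
nc localhost 44444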
Start the Hive services:
hiveservices.sh start
After startup, two RunJar processes appear (the Hive Metastore and HiveServer2)
Start the Hadoop cluster:
mycluster start
Start the ZooKeeper service:
myzookeeper start
Check the currently running processes:
jpscall
Start the Flume agent process:
bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf
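To keep the agent running after the terminal closes, a common variant (an assumption, not part of the original notes) is to run it in the background:
nohup bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf >/dev/null 2>&1 &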