Installing and using Hue, and integrating it with databases



Hue is an open-source UI system for Apache Hadoop. It evolved from Cloudera Desktop, which Cloudera later contributed to the Hadoop community under the Apache foundation, and it is built on the Python web framework Django.

Through Hue's browser-based web console we can interact with a Hadoop cluster to analyze and process data: manipulating data on HDFS, running MapReduce jobs, executing Hive SQL statements, browsing the HBase database, and so on.

Hue features:

  • Supports the various Hadoop versions
  • Default metadata database: SQLite
  • File browser: create, read, update, and delete data
  • The Hue source package goes through a first-stage and a second-stage build; here a package that has already completed the first-stage build is used

Using MySQL as Hue's metadata database

Hue uses SQLite as its metadata database by default, but this frequently produces "database is locked" errors, so we replace it with MySQL.

First create the hue database in MySQL:

create database hue default character set utf8 default collate utf8_general_ci;
grant all on hue.* to 'hue'@'%' identified by 'hue';
select * from information_schema.schemata;

Edit hue.ini. Note that the user and password below must belong to a MySQL account with privileges on the hue database; the grant above created user hue with password hue, while this example connects as hadoop.

[[database]] 

  engine=mysql
  host=slave1
  port=3306
  user=hadoop
  password=hadoop
  name=hue
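
Hue's hue.ini uses nested `[[section]]` markers, which Hue parses with configobj rather than Python's stdlib configparser. Purely as an illustration of what the block above looks like once parsed, here is a minimal hand-rolled sketch (not Hue's actual parser):

```python
# Minimal sketch of reading a Hue-style nested ini section.
# Hue itself uses configobj; this flat parser is only illustrative.

def parse_hue_ini(text):
    """Parse lines into {section_name: {key: value}}, flattening nesting."""
    sections = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith('#'):
            continue
        if line.startswith('['):
            # "[[database]]" -> "database"
            current = sections.setdefault(line.strip('[]'), {})
        elif '=' in line and current is not None:
            key, _, value = line.partition('=')
            current[key.strip()] = value.strip()
    return sections

sample = """
[[database]]
  engine=mysql
  host=slave1
  port=3306
  user=hadoop
  password=hadoop
  name=hue
"""

db = parse_hue_ini(sample)['database']
print(db['engine'], db['host'], db['name'])  # mysql slave1 hue
```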

Initialize the database:

cd hue/build/env/
bin/hue syncdb
bin/hue migrate

After these commands finish, you can see in MySQL that Hue's tables have been created.

Start Hue; it should now be accessible.

Configuring Hue with the Hadoop components (HDFS, YARN)

1. HDFS configuration

Add the following to hdfs-site.xml:

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

Add the following to core-site.xml (the key point is to add proxy-user entries for whichever user you run Hadoop as; I start HDFS as root, so the root user is included here):

<!-- Proxy-user hosts for the Hadoop cluster -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>*</value>
</property>
<!-- Proxy-user groups for the Hadoop cluster -->
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.hdfs.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

Start or restart HDFS.

WebHDFS must also be serving on your NameNode and DataNodes (the dfs.webhdfs.enabled setting above enables it); alternatively, a separate HttpFS gateway can be started and pointed to in webhdfs_url.
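
Before wiring Hue up, it is worth confirming that WebHDFS answers on the NameNode. A small sketch, using the hostname, port, and user from this guide (adjust for your cluster); the live request is left commented out:

```python
# Sketch: build a WebHDFS v1 REST URL and (optionally) probe it.
from urllib.request import urlopen  # used only against a live cluster

def webhdfs_url(host, path, op, user, port=50070):
    """Build a WebHDFS v1 URL (50070 is the NameNode HTTP port on Hadoop 2.x)."""
    return "http://{}:{}/webhdfs/v1{}?op={}&user.name={}".format(
        host, port, path, op, user)

url = webhdfs_url("hadoop01.xningge.com", "/", "LISTSTATUS", "root")
print(url)
# Uncomment on a live cluster; a JSON FileStatuses listing should come back:
# print(urlopen(url, timeout=5).read())
```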

Add the following to the Hue configuration (hue.ini and hue-pseudo-distributed.ini):

# User Hue's web server runs as
server_user=root
# Group Hue's web server runs as
server_group=root
# Hue's default user
default_user=root
# HDFS superuser Hue acts as
default_hdfs_superuser=root


[[hdfs_clusters]]
# HA support by using HttpFs

[[[default]]]
# Enter the filesystem uri
fs_defaultfs=hdfs://hadoop01.xningge.com:8020

# NameNode logical name.
## logical_name=

# Use WebHdfs/HttpFs as the communication mechanism.
# Domain should be the NameNode or HttpFs host.
# Default port is 14000 for HttpFs.
webhdfs_url=http://hadoop01.xningge.com:50070/webhdfs/v1

# This is the home of your Hadoop HDFS installation
hadoop_hdfs_home=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6

# Use this as the HDFS Hadoop launcher script
hadoop_bin=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/bin

# Change this if your HDFS cluster is Kerberos-secured
## security_enabled=false

# Default umask for file and directory creation, specified in an octal value.
## umask=022

# Directory of the Hadoop configuration
hadoop_conf_dir=/opt/modules/cdh/hadoop-2.5.0-cdh5.3.6/etc/hadoop

[[yarn_clusters]]

[[[default]]]
# Enter the host on which you are running the ResourceManager
resourcemanager_host=hadoop01.xningge.com

# The port where the ResourceManager IPC listens on
resourcemanager_port=8032

# Whether to submit jobs to this cluster
submit_to=True

# Resource Manager logical name (required for HA)
## logical_name=

# Change this if your YARN cluster is Kerberos-secured
## security_enabled=false

# URL of the ResourceManager API
resourcemanager_api_url=http://hadoop01.xningge.com:8088

# URL of the ProxyServer API
proxy_api_url=http://hadoop01.xningge.com:8088

# URL of the HistoryServer API
history_server_api_url=http://hadoop01.xningge.com:19888
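
The three API URLs above can be smoke-tested before starting Hue. A sketch with the hosts and ports from this guide (the ProxyServer shares the ResourceManager address in this setup); the live requests are commented out:

```python
# Sketch: the YARN/MapReduce REST endpoints Hue depends on, with their
# standard info paths, for a quick reachability check.
from urllib.request import urlopen  # used only against a live cluster

ENDPOINTS = {
    "resourcemanager": "http://hadoop01.xningge.com:8088/ws/v1/cluster/info",
    "history_server": "http://hadoop01.xningge.com:19888/ws/v1/history/info",
}

for name, url in ENDPOINTS.items():
    print(name, "->", url)
    # Uncomment on a live cluster; each should return a small JSON document:
    # print(urlopen(url, timeout=5).read())
```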

After starting Hue, add the user you use in HDFS or Hive through the user admin page, and log in as that user; only then will you see that user's files and jobs in HDFS or Hive.

Configuring Hue with Hive

Deploy and start the Hive services:

bin/hiveserver2 &
# (equivalent to: bin/hive --service hiveserver2 &)
bin/hive --service metastore &

If these services are not running, the Hue page reports errors such as "Could not connect to localhost:10000" or "Could not connect to bigdatamaster:10000".
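
The "Could not connect to …:10000" error simply means nothing is listening on HiveServer2's Thrift port. A quick, dependency-free probe can be sketched with a plain socket (replace the host with your HiveServer2 host):

```python
# Sketch: check whether anything is listening on HiveServer2's Thrift port.
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Replace "localhost" with your HiveServer2 host:
print(port_open("localhost", 10000))
```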

Hue configuration (hue.ini and hue-pseudo-distributed.ini):

[beeswax]

# Host where the HiveServer2 process runs (replace the placeholder below)
hive_server_host=hive-service-name
hive_server_port=10000

# Hive configuration directory, where hive-site.xml is located
hive_conf_dir=/opt/modules/cdh/hive-0.13.1-cdh5.3.6/conf

# Timeout in seconds for thrift calls to Hive service
server_conn_timeout=120

# Choose whether Hue uses the GetLog() thrift call to retrieve Hive logs.
# If false, Hue will use the FetchResults() thrift call instead.
## use_get_log_api=true

# Set a LIMIT clause when browsing a partitioned table.
# A positive value will be set as the LIMIT. If 0 or negative, do not set any limit.
## browse_partitioned_table_limit=250

# A limit to the number of rows that can be downloaded from a query.
# A value of -1 means there will be no limit.
# A maximum of 65,000 is applied to XLS downloads.
## download_row_limit=1000000

# Hue will try to close the Hive query when the user leaves the editor page.
# This will free all the query resources in HiveServer2, but also make its results inaccessible.
## close_queries=false

# Thrift version to use when communicating with HiveServer2
## thrift_version=5

Integrating Hue with Impala

Install and start Impala.

Edit the Hue configuration (hue.ini and hue-pseudo-distributed.ini):

[impala]
  server_host=hadoop-impala-server-catalog-state-store.cloudai-2.com
  server_port=21050

Configuring Hue with relational databases

Reference: http://gethue.com/custom-sql-query-editors/

[librdbms]
# The RDBMS app can have any number of databases configured in the databases
# section. A database is known by its section name
# (IE sqlite, mysql, psql, and oracle in the list below).

[[databases]]
# sqlite configuration.
# Note: this section header must be uncommented for the entry to load.
[[[sqlite]]]
# Name to show in the UI.
nice_name=SQLite

# For SQLite, name defines the path to the database.
name=/opt/modules/hue-3.7.0-cdh5.3.6/desktop/desktop.db

# Database backend to use.
engine=sqlite

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}

# mysql, oracle, or postgresql configuration.

  ## Note: do not change these defaults; by default this is the hue database.

# Note: this section header must be uncommented for the entry to load.
[[[mysql]]]
# Name to show in the UI.
nice_name="My SQL DB"

# For MySQL and PostgreSQL, name is the name of the database.
# For Oracle, Name is instance of the Oracle server. For express edition
# this is 'xe' by default.
name=sqoop  # the name of the database to expose (a database, not a table)

# Database backend to use. This can be:
# 1. mysql
# 2. postgresql
# 3. oracle
engine=mysql

# IP or hostname of the database to connect to.
host=hadoop01.xningge.com

# Port the database server is listening to. Defaults are:
# 1. MySQL: 3306
# 2. PostgreSQL: 5432
# 3. Oracle Express Edition: 1521
port=3306

# Username to authenticate with when connecting to the database.
user=xningge

# Password matching the username to authenticate with when
# connecting to the database.
password=???

# Database options to send to the server when connecting.
# https://docs.djangoproject.com/en/1.4/ref/databases/
## options={}
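
For the `[[[sqlite]]]` entry earlier, name is simply a file path: browsing that database amounts to opening the file with a SQLite driver and querying it. A minimal sketch, using an in-memory database and a made-up table in place of the real desktop.db path:

```python
# Sketch: what the [[[sqlite]]] librdbms entry amounts to -- opening the
# database file and querying it. ":memory:" stands in for the desktop.db path,
# and the "demo" table is purely illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")  # the real config would pass the desktop.db path
conn.execute("CREATE TABLE demo (id INTEGER PRIMARY KEY, msg TEXT)")
conn.execute("INSERT INTO demo (msg) VALUES ('hello from hue')")

# Listing tables, roughly as a query editor's object browser would:
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['demo']
```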

Configuring Hue with ZooKeeper

Only hue.ini needs to be edited:

host_ports=hadoop01.xningge.com:2181
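
host_ports accepts a comma-separated list when pointing at a ZooKeeper ensemble. A sketch of how such a value splits into (host, port) pairs:

```python
# Sketch: splitting a Hue-style host_ports value into (host, port) pairs.
def parse_host_ports(value):
    pairs = []
    for item in value.split(','):
        host, _, port = item.strip().partition(':')
        pairs.append((host, int(port)))
    return pairs

print(parse_host_ports("hadoop01.xningge.com:2181"))
# An ensemble would look like: "zk1:2181,zk2:2181,zk3:2181"
```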

Start ZooKeeper (typically bin/zkServer.sh start on each node of the ensemble).

Configuring Hue with Oozie

Edit hue.ini:

[liboozie]
oozie_url=http://hadoop01.xningge.com:11000/oozie

If the Oozie app still does not show up, edit oozie-site.xml:

<property>
  <name>oozie.service.WorkflowAppService.system.libpath</name>
  <value>/user/oozie/share/lib</value>
</property>

Then recreate the sharelib under the Oozie directory:

bin/oozie-setup.sh sharelib create -fs hdfs://hadoop01.xningge.com:8020 -locallib oozie-sharelib-4.0.0-cdh5.3.6-yarn.tar.gz

Start Oozie: bin/oozied.sh start
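
To confirm Oozie is up before pointing Hue at it, its REST status endpoint can be polled. A sketch that derives the status URL from the oozie_url value above (the /v1/admin/status path is Oozie's standard Web Services status endpoint); the live request is commented out:

```python
# Sketch: derive Oozie's REST status URL from the liboozie oozie_url value.
from urllib.request import urlopen  # used only against a live Oozie server

oozie_url = "http://hadoop01.xningge.com:11000/oozie"
status_url = oozie_url.rstrip('/') + "/v1/admin/status"
print(status_url)
# On a live server this returns JSON like {"systemMode":"NORMAL"}:
# print(urlopen(status_url, timeout=5).read())
```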

Configuring Hue with HBase

Edit hue.ini:

hbase_clusters=(Cluster|hadoop01.xningge.com:9090)
hbase_conf_dir=/opt/cdh_5.3.6/hbase-0.98.6-cdh5.3.6/conf

Add the following to hbase-site.xml:

<property>
  <name>hbase.regionserver.thrift.http</name>
  <value>true</value>
</property>
<property>
  <name>hbase.thrift.support.proxyuser</name>
  <value>true</value>
</property>

Start HBase and its Thrift server:

bin/start-hbase.sh
bin/hbase-daemon.sh start thrift

For a fully distributed HBase deployment, list every cluster and its Thrift server:

hbase_clusters=(Cluster1|hostname:9090,Cluster2|hostname:9090,Cluster3|hostname:9090)
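
The hbase_clusters value is a parenthesised, comma-separated list of Name|host:port entries. A sketch of how such a value decomposes:

```python
# Sketch: decomposing a Hue hbase_clusters value into name -> (host, port).
def parse_hbase_clusters(value):
    clusters = {}
    for entry in value.strip('()').split(','):
        name, _, addr = entry.strip().partition('|')
        host, _, port = addr.partition(':')
        clusters[name] = (host, int(port))
    return clusters

value = "(Cluster1|host1:9090,Cluster2|host2:9090)"
print(parse_hbase_clusters(value))
```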

