Hadoop 2: Setting Up HA with Manual Failover

一葉_code 2022-09-12

-----------------------------

1. Setting up HA with manual failover (compared with a Hadoop 1 cluster setup, this adds a JournalNode cluster)

-----------------------------

namenode: hadoop0 and hadoop1

datanode: hadoop2, hadoop3, hadoop4

journalnode: hadoop0, hadoop1, hadoop2 (there must be an odd number of JournalNodes)

resourcemanager: hadoop0

nodemanager: hadoop2, hadoop3, hadoop4



1.1 Configuration files (hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves); the XML properties below all go inside each file's <configuration> element.

1.1.1 hadoop-env.sh

export JAVA_HOME=/usr/local/jdk

1.1.2 core-site.xml



<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster1</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
</property>



1.1.3 hdfs-site.xml



<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

<property>
  <name>dfs.nameservices</name>
  <value>cluster1</value>
</property>

<property>
  <name>dfs.ha.namenodes.cluster1</name>
  <value>hadoop101,hadoop102</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.cluster1.hadoop101</name>
  <value>hadoop0:9000</value>
</property>

<property>
  <name>dfs.namenode.http-address.cluster1.hadoop101</name>
  <value>hadoop0:50070</value>
</property>

<property>
  <name>dfs.namenode.rpc-address.cluster1.hadoop102</name>
  <value>hadoop1:9000</value>
</property>

<property>
  <name>dfs.namenode.http-address.cluster1.hadoop102</name>
  <value>hadoop1:50070</value>
</property>

<property>
  <name>dfs.ha.automatic-failover.enabled.cluster1</name>
  <value>false</value>
</property>

<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop0:8485;hadoop1:8485;hadoop2:8485/cluster1</value>
</property>

<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/local/hadoop/tmp/journal</value>
</property>

<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>

<property>
  <name>dfs.client.failover.proxy.provider.cluster1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
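
Note that hadoop101 and hadoop102 are logical NameNode IDs; the rpc-address and http-address entries map them to the physical hosts hadoop0 and hadoop1. As a sanity check that these values are being picked up, hdfs getconf can echo any key's effective value; a minimal sketch, run from the directory containing hadoop:

# each command prints the effective value of one configuration key
hadoop/bin/hdfs getconf -confKey dfs.nameservices
hadoop/bin/hdfs getconf -confKey dfs.ha.namenodes.cluster1
hadoop/bin/hdfs getconf -confKey dfs.namenode.rpc-address.cluster1.hadoop101
hadoop/bin/hdfs getconf -confKey dfs.namenode.rpc-address.cluster1.hadoop102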



1.1.4 yarn-site.xml



<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop0</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>



1.1.5 mapred-site.xml



<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>



1.1.6 slaves

hadoop2
hadoop3
hadoop4



1.1.7 Copy the hadoop directory from hadoop0 to the hadoop1, hadoop2, hadoop3, and hadoop4 nodes

scp -rq hadoop hadoop1:/usr/local
scp -rq hadoop hadoop2:/usr/local
scp -rq hadoop hadoop3:/usr/local
scp -rq hadoop hadoop4:/usr/local
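
Equivalently, a small loop avoids repeating the command; a minimal sketch, assuming the hadoop directory sits in /usr/local on hadoop0 and passwordless SSH to the other nodes is already configured:

# run on hadoop0 from /usr/local: copy the hadoop directory to every other node
for h in hadoop1 hadoop2 hadoop3 hadoop4; do
  scp -rq hadoop "$h":/usr/local
done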





1.2 Start the JournalNode cluster

On each of hadoop0, hadoop1, and hadoop2, run hadoop/sbin/hadoop-daemon.sh start journalnode; a quick verification sketch follows.
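
To confirm each JournalNode actually came up, a quick check on each of the three nodes (jps ships with the JDK configured in hadoop-env.sh; the log path is an assumption based on the default logs directory):

# a JournalNode process should appear in the output
jps
# if it is missing, inspect its log under the hadoop logs directory
tail -n 20 hadoop/logs/*journalnode*.log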

1.3 Format and start the NameNodes

On hadoop0, run hadoop/bin/hdfs namenode -format

On hadoop0, run hadoop/sbin/hadoop-daemon.sh start namenode

On hadoop1, run hadoop/bin/hdfs namenode -bootstrapStandby (this copies the freshly formatted namespace metadata over from hadoop0; do not run -format on hadoop1)

On hadoop1, run hadoop/sbin/hadoop-daemon.sh start namenode

On hadoop0, run hadoop/bin/hdfs haadmin -failover --forceactive hadoop101 hadoop102

(The HA machinery guarantees that only one NameNode is active while the other stays standby; since automatic failover is disabled here, switching is done by hand, as sketched below.)
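
To see which NameNode is currently active, or to switch again later, the standard haadmin subcommands apply; a minimal sketch using the NameNode IDs configured above:

# report active / standby for each NameNode ID
hadoop/bin/hdfs haadmin -getServiceState hadoop101
hadoop/bin/hdfs haadmin -getServiceState hadoop102
# fail over so that hadoop102 becomes active (swap the arguments to switch back)
hadoop/bin/hdfs haadmin -failover hadoop101 hadoop102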

1.4 Start the DataNodes

On hadoop0, run hadoop/sbin/hadoop-daemons.sh start datanode (note the plural hadoop-daemons.sh: it starts a DataNode on every host listed in slaves); a quick check is sketched below.
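
As a quick check that all three DataNodes registered with the active NameNode:

# the report summary should show hadoop2, hadoop3 and hadoop4 as live datanodes
hadoop/bin/hdfs dfsadmin -report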

1.5 Start the ResourceManager and NodeManagers

On hadoop0, run hadoop/sbin/start-yarn.sh, which starts the ResourceManager on the local node and a NodeManager on every host listed in slaves.
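
To confirm the ResourceManager sees its NodeManagers (registration can take a few seconds after startup):

# should list hadoop2, hadoop3 and hadoop4 in RUNNING state
hadoop/bin/yarn node -list
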
Reference: http://www.superwu.cn/2014/02/12/1094/
