Running a MapReduce jar fails: Exception in thread "main" ExitCodeException exitCode=1


  • I. Environment
  • II. Error Message
  • III. Attempted Solution
  • IV. Another Issue
  • V. References

I. Environment

  • CentOS 7.5
  • Hadoop 3.1.3
  • Pseudo-distributed mode
  • IDEA

II. Error Message

1. Running the MapReduce jar fails with: Exception in thread "main" ExitCodeException exitCode=1: chmod: cannot access '/tmp/hadoop/mapred/staging/zhangsan1447658824/.staging/job_local1447658824_0001': No such file or directory


The file was confirmed to actually exist.

2. Based on the error message, this looked like a permissions problem, so the permissions on the tmp directory under the Hadoop installation directory were changed to 777:

# First cd into the Hadoop installation directory
chmod 777 ./tmp

However, this did not resolve the error.
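One thing worth ruling out here: `chmod` without `-R` only changes the top-level directory, while the staging path in the error message is several levels deep. A minimal sketch of the difference, using a throwaway stand-in for the Hadoop tree (the layout below is illustrative, not the real installation):

```shell
# Recreate a staging-like directory tree under a temp root.
root=$(mktemp -d)
mkdir -p "$root/tmp/hadoop/mapred/staging"

chmod 777 "$root/tmp"                           # top level only
stat -c %a "$root/tmp/hadoop/mapred/staging"    # still the old mode (e.g. 755)

chmod -R 777 "$root/tmp"                        # recursive: covers the nested staging dirs
stat -c %a "$root/tmp/hadoop/mapred/staging"    # now 777
```

If the job really stages under the local `./tmp`, `chmod -R 777 ./tmp` would be the recursive equivalent of the command above.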

3. Searching online turned up the suggestion that HDFS's default group is the superuser group supergroup, and that the current user needs to be added to it. The steps are in Section III below.

III. Attempted Solution

The Hadoop web UI shows that the group on the files is supergroup, so the idea is to add the regular user to HDFS's superuser group supergroup.

1. Steps

First switch to the root user.

  • First, check for existing supergroup information

[root@hadoop112 zhangsan]# group supergroup /etc/group

bash: group: command not found...

  • Search again, this time with grep

[root@hadoop112 zhangsan]# grep supergroup /etc/group

No output.

  • Create the supergroup group

[root@hadoop112 zhangsan]# groupadd supergroup

  • Search again to confirm

[root@hadoop112 zhangsan]# grep supergroup /etc/group

supergroup:x:1001:

  • Add the current user to the supergroup group

[root@hadoop112 zhangsan]# usermod -a -G supergroup zhangsan

  • Check that the user was successfully added to the supergroup group

[root@hadoop112 zhangsan]# id zhangsan

uid=1000(zhangsan) gid=1000(zhangsan) groups=1000(zhangsan),1001(supergroup)

  • Refresh the user-to-groups mappings

[root@hadoop112 zhangsan]# hdfs dfsadmin -refreshUserToGroupsMappings

Refresh user to groups mapping successful

  • Switch back to the regular user

[root@hadoop112 zhangsan]# su zhangsan

[zhangsan@hadoop112 ~]$ hdfs dfsadmin -report

Configured Capacity: 38966558720 (36.29 GB)
Present Capacity: 25293688832 (23.56 GB)
DFS Remaining: 25293635584 (23.56 GB)
DFS Used: 53248 (52 KB)
DFS Used%: 0.00%
Replicated Blocks:
    Under replicated blocks: 0
    Blocks with corrupt replicas: 0
    Missing blocks: 0
    Missing blocks (with replication factor 1): 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0
Erasure Coded Block Groups:
    Low redundancy block groups: 0
    Block groups with corrupt internal blocks: 0
    Missing block groups: 0
    Low redundancy blocks with highest priority to recover: 0
    Pending deletion blocks: 0

Live datanodes (1):

Name: 192.168.149.112:9866 (hadoop112)
Hostname: hadoop112
Decommission Status : Normal
Configured Capacity: 38966558720 (36.29 GB)
DFS Used: 53248 (52 KB)
Non DFS Used: 11669880832 (10.87 GB)
DFS Remaining: 25293635584 (23.56 GB)
DFS Used%: 0.00%
DFS Remaining%: 64.91%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Thu May 12 23:07:35 CST 2022
Last Block Report: Thu May 12 22:59:14 CST 2022
Num of Blocks: 2
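As a general pattern, the membership check above can be scripted before re-running the job. A minimal sketch; it tests the current user against their own primary group so it runs anywhere, and `zhangsan`/`supergroup` would be substituted in practice:

```shell
# Return success if user $1 is a member of group $2.
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

me=$(id -un)
if in_group "$me" "$(id -gn)"; then    # every user belongs to their own primary group
    echo "$me is a member"
else
    echo "$me is not a member"
fi
```

`id -nG` prints all group names for the user, which also reflects supplementary groups added via `usermod -a -G` (after a fresh login).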

2. Outcome

Unfortunately, this method still did not solve the problem.

IV. Another Issue

When running the MapReduce jar from the command line, the MR job completes successfully and produces the correct output, but during execution IDEA gets closed automatically.

Guess: is YARN running out of resources? But this is just a simple MR test case, so that seems unlikely.
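If the YARN-resources guess were right, the container memory limits in yarn-site.xml would be the place to look. The property names below are standard Hadoop 3.x settings, but the values are illustrative assumptions, not taken from this cluster:

```xml
<!-- yarn-site.xml: illustrative values, not this cluster's actual config -->
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>   <!-- total memory YARN may allocate on this node -->
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>   <!-- upper bound for a single container request -->
</property>
```

On a small pseudo-distributed VM, another possibility worth checking is the kernel OOM killer reclaiming memory by killing the heaviest process (which IDEA often is); `dmesg | grep -i oom` would show whether that happened.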

Noting this issue here for the record.

V. References

https://cloud.tencent.com/developer/article/1545624#:~:text=Hadoop%E6%9C%AC%E8%BA%AB,S%E7%94%A8%E6%88%B7%E5%92%8C%E7%BB%84%E5%8D%B3%E5%8F%AF%E3%80%82
