

Scenario:

While setting up Kerberos on a Hadoop HA cluster, I created a new ordinary (non-admin) user to test that it could run jobs.

Problem description

After configuring Kerberos on the Hadoop HA cluster, I created a new user xwq and had it run the command hadoop jar /opt/ha/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 1 1. It failed with the following error:

[xwq@hadoop102 hive]$ hadoop jar /opt/ha/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
2023-06-21 10:13:36,763 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
Wrote input for Map #0
Starting Job
2023-06-21 10:13:37,121 INFO hdfs.DFSClient: Created token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687313617117, maxDate=1687918417117, sequenceNumber=14, masterKeyId=12 on ha-hdfs:mycluster
2023-06-21 10:13:37,122 INFO security.TokenCache: Got dt for hdfs://mycluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:mycluster, Ident: (token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687313617117, maxDate=1687918417117, sequenceNumber=14, masterKeyId=12)
2023-06-21 10:13:37,214 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/xwq/.staging/job_1687309231357_0007
2023-06-21 10:13:37,241 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:13:37,306 INFO input.FileInputFormat: Total input files to process : 1
2023-06-21 10:13:37,319 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:13:37,342 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:13:37,353 INFO mapreduce.JobSubmitter: number of splits:1
2023-06-21 10:13:37,448 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:13:37,462 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1687309231357_0007
2023-06-21 10:13:37,462 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:mycluster, Ident: (token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687313617117, maxDate=1687918417117, sequenceNumber=14, masterKeyId=12)]
2023-06-21 10:13:37,615 INFO conf.Configuration: resource-types.xml not found
2023-06-21 10:13:37,615 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-06-21 10:13:37,867 INFO impl.YarnClientImpl: Submitted application application_1687309231357_0007
2023-06-21 10:13:37,901 INFO mapreduce.Job: The url to track the job: http://hadoop102:8088/proxy/application_1687309231357_0007/
2023-06-21 10:13:37,902 INFO mapreduce.Job: Running job: job_1687309231357_0007
2023-06-21 10:13:44,994 INFO mapreduce.Job: Job job_1687309231357_0007 running in uber mode : false
2023-06-21 10:13:44,995 INFO mapreduce.Job:  map 0% reduce 0%
2023-06-21 10:13:50,042 INFO mapreduce.Job:  map 100% reduce 0%
2023-06-21 10:13:53,058 INFO mapreduce.Job: Task Id : attempt_1687309231357_0007_r_000000_1000, Status : FAILED
[2023-06-21 10:13:51.502]Application application_1687309231357_0007 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is xwq
main : requested yarn user is xwq
User xwq not found
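The root cause: in a Kerberized (secure) cluster, YARN launches containers through the LinuxContainerExecutor, which switches to the submitting OS account on each NodeManager host. If that Linux account does not exist on a node, container initialization fails with exactly this "User xwq not found" error. The executor's behavior is governed by container-executor.cfg; the snippet below is illustrative only (the keys are the standard settings, but the values shown are assumptions, not taken from this cluster):

```
# /opt/ha/hadoop/etc/hadoop/container-executor.cfg (illustrative values)
yarn.nodemanager.linux-container-executor.group=hadoop  # group the NodeManager runs as
banned.users=hdfs,yarn,mapred,bin                       # accounts never allowed to run containers
min.user.id=1000                                        # reject system accounts with uid below this
```

Note that even once the account exists everywhere, a uid below min.user.id would still be rejected (with a different error), so new job-submitting users should be created as regular accounts with uid >= 1000.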

Solution:

The xwq OS account must exist on every node of the Hadoop cluster. Run the following commands on each node (the kadmin step creates the Kerberos principal in the KDC, so it only needs to be run once):

useradd xwq
echo xwq | passwd --stdin xwq
usermod -a -G hadoop xwq
kadmin -p admin/admin -wNTVfPQY9kNs6 -q "addprinc -pw xwq xwq"
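To avoid missing a node (which is exactly what causes this error), the per-node commands can be generated from a host list and reviewed before running them over ssh. This is a minimal sketch; the hostnames hadoop102 through hadoop104 are assumptions and should be replaced with your cluster's actual nodes:

```shell
# Print (not execute) the user-creation command for each node, so the list
# can be reviewed before piping it to sh or a parallel-ssh tool.
# NODES is an assumption -- substitute your cluster's NodeManager hosts.
NODES="hadoop102 hadoop103 hadoop104"
for node in $NODES; do
  printf 'ssh %s "useradd xwq && echo xwq | passwd --stdin xwq && usermod -a -G hadoop xwq"\n' "$node"
done
```

A quick follow-up check such as running id xwq on each node confirms the account (and its hadoop group membership) exists everywhere before resubmitting the job.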

Then re-run hadoop jar /opt/ha/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 1 1; this time it succeeds:

[xwq@hadoop102 ~]$ hadoop jar /opt/ha/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 1 1
Number of Maps  = 1
Samples per Map = 1
2023-06-21 10:27:07,443 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
Wrote input for Map #0
Starting Job
2023-06-21 10:27:07,754 INFO hdfs.DFSClient: Created token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687314427749, maxDate=1687919227749, sequenceNumber=16, masterKeyId=12 on ha-hdfs:mycluster
2023-06-21 10:27:07,755 INFO security.TokenCache: Got dt for hdfs://mycluster; Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:mycluster, Ident: (token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687314427749, maxDate=1687919227749, sequenceNumber=16, masterKeyId=12)
2023-06-21 10:27:07,838 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/xwq/.staging/job_1687309231357_0009
2023-06-21 10:27:07,859 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:27:07,925 INFO input.FileInputFormat: Total input files to process : 1
2023-06-21 10:27:07,939 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:27:07,962 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:27:07,974 INFO mapreduce.JobSubmitter: number of splits:1
2023-06-21 10:27:08,064 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2023-06-21 10:27:08,079 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1687309231357_0009
2023-06-21 10:27:08,080 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:mycluster, Ident: (token for xwq: HDFS_DELEGATION_TOKEN owner=xwq@EXAMPLE.COM, renewer=yarn, realUser=, issueDate=1687314427749, maxDate=1687919227749, sequenceNumber=16, masterKeyId=12)]
2023-06-21 10:27:08,222 INFO conf.Configuration: resource-types.xml not found
2023-06-21 10:27:08,222 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-06-21 10:27:08,474 INFO impl.YarnClientImpl: Submitted application application_1687309231357_0009
2023-06-21 10:27:08,508 INFO mapreduce.Job: The url to track the job: http://hadoop102:8088/proxy/application_1687309231357_0009/
2023-06-21 10:27:08,508 INFO mapreduce.Job: Running job: job_1687309231357_0009
2023-06-21 10:27:14,599 INFO mapreduce.Job: Job job_1687309231357_0009 running in uber mode : false
2023-06-21 10:27:14,600 INFO mapreduce.Job:  map 0% reduce 0%
2023-06-21 10:27:20,649 INFO mapreduce.Job:  map 100% reduce 0%
2023-06-21 10:27:24,667 INFO mapreduce.Job:  map 100% reduce 100%
2023-06-21 10:27:25,676 INFO mapreduce.Job: Job job_1687309231357_0009 completed successfully
2023-06-21 10:27:25,752 INFO mapreduce.Job: Counters: 53
	File System Counters
		FILE: Number of bytes read=28
		FILE: Number of bytes written=452853
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=258
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=9
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3415
		Total time spent by all reduces in occupied slots (ms)=2005
		Total time spent by all map tasks (ms)=3415
		Total time spent by all reduce tasks (ms)=2005
		Total vcore-milliseconds taken by all map tasks=3415
		Total vcore-milliseconds taken by all reduce tasks=2005
		Total megabyte-milliseconds taken by all map tasks=3496960
		Total megabyte-milliseconds taken by all reduce tasks=2053120
	Map-Reduce Framework
		Map input records=1
		Map output records=2
		Map output bytes=18
		Map output materialized bytes=28
		Input split bytes=140
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=28
		Reduce input records=2
		Reduce output records=0
		Spilled Records=4
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=83
		CPU time spent (ms)=1110
		Physical memory (bytes) snapshot=641515520
		Virtual memory (bytes) snapshot=5220311040
		Total committed heap usage (bytes)=1142947840
		Peak Map Physical memory (bytes)=371208192
		Peak Map Virtual memory (bytes)=2606309376
		Peak Reduce Physical memory (bytes)=270307328
		Peak Reduce Virtual memory (bytes)=2614001664
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=118
	File Output Format Counters
		Bytes Written=97
Job Finished in 18.208 seconds
2023-06-21 10:27:25,809 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
Estimated value of Pi is 4.00000000000000000000

