Hive `select count(*)` stuck at: Stage-1 map = 0%, reduce = 0%


1. Problem description

Running `select count(*) from student;` in the Hive shell hangs indefinitely, with map/reduce progress stuck at 0%:

hive> select count(*) from student;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20190712235937_41a66280-b28b-4414-8a68-4e52e243bff3
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:set mapreduce.job.reduces=<number>
Starting Job = job_1562982531171_0001, Tracking URL = http://cmaster:8088/proxy/application_1562982531171_0001/
Kill Command = /opt/softWare/hadoop/hadoop-2.7.3/bin/hadoop job  -kill job_1562982531171_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-07-13 00:00:21,337 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:01:21,531 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:02:22,523 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:03:23,383 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:04:23,894 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:05:24,817 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:06:25,394 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:07:26,368 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:08:26,770 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:09:27,813 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:10:28,492 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:11:29,124 Stage-1 map = 0%,  reduce = 0%
......
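A likely explanation (assuming stock Hadoop 2.7 defaults, which the original post does not state): the NodeManager only advertises 2048 MB, the MapReduce ApplicationMaster requests 1536 MB (`yarn.app.mapreduce.am.resource.mb` default), and YARN rounds every request up to a multiple of the 1024 MB minimum allocation. The AM therefore occupies the entire node, and the 1024 MB map container can never be scheduled. A rough sketch of the arithmetic:

```shell
NM_MEM=2048    # yarn.nodemanager.resource.memory-mb (the value before the fix)
MIN_ALLOC=1024 # yarn.scheduler.minimum-allocation-mb default
AM_REQ=1536    # yarn.app.mapreduce.am.resource.mb default
MAP_REQ=1024   # mapreduce.map.memory.mb default

# YARN rounds each container request up to a multiple of the minimum allocation:
AM_MEM=$(( (AM_REQ + MIN_ALLOC - 1) / MIN_ALLOC * MIN_ALLOC ))
FREE=$(( NM_MEM - AM_MEM ))
echo "AM container: ${AM_MEM} MB, memory left for map tasks: ${FREE} MB"
# prints: AM container: 2048 MB, memory left for map tasks: 0 MB
```

With 0 MB left for a 1024 MB map container, the job sits at map = 0% forever, which matches the log above.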

2. Solution

(1) Go to the Hadoop configuration directory and edit yarn-site.xml:

cd /opt/softWare/hadoop/hadoop-2.7.3/etc/hadoop

vim yarn-site.xml

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>

Change it to:

<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
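The NodeManager total is not the only knob: the scheduler also caps individual container sizes, and a request above `yarn.scheduler.maximum-allocation-mb` is rejected outright. A sketch of the related settings (the values here are illustrative, not from the original post; tune them to your node's physical memory):

```xml
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>1024</value> <!-- each container request is rounded up to a multiple of this -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value> <!-- per-container ceiling; keep it <= yarn.nodemanager.resource.memory-mb -->
</property>
```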

(2) Restart the YARN services:

cd /opt/softWare/hadoop/hadoop-2.7.3/sbin
[root@cmaster sbin]# ./stop-yarn.sh
[root@cmaster sbin]# ./start-yarn.sh
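Before restarting, it is worth confirming the edit actually took. A minimal sanity-check sketch: grep the configured value back out of the file. The here-doc below stands in for the real file at /opt/softWare/hadoop/hadoop-2.7.3/etc/hadoop/yarn-site.xml so the snippet is self-contained:

```shell
# Sample standing in for the real yarn-site.xml (replace the path with yours).
cat > /tmp/yarn-site-sample.xml <<'EOF'
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
EOF

# Pull the number out of the <value> line that follows the property name.
MEM=$(grep -A1 'yarn.nodemanager.resource.memory-mb' /tmp/yarn-site-sample.xml | grep -o '[0-9]\+')
echo "$MEM"
# prints: 4096
```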

3. Verified fix: the query now completes

hive> select count(*) from student;
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = root_20190713002124_69d40918-a6b3-4755-ae76-e45e695cfeac
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:set mapreduce.job.reduces=<number>
Starting Job = job_1563002427699_0001, Tracking URL = http://cmaster:8088/proxy/application_1563002427699_0001/
Kill Command = /opt/softWare/hadoop/hadoop-2.7.3/bin/hadoop job  -kill job_1563002427699_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-07-13 00:21:53,647 Stage-1 map = 0%,  reduce = 0%
2019-07-13 00:22:14,111 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 2.67 sec
2019-07-13 00:22:33,107 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.67 sec
MapReduce Total cumulative CPU time: 5 seconds 670 msec
Ended Job = job_1563002427699_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 5.67 sec   HDFS Read: 7660 HDFS Write: 101 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 670 msec
OK
3
Time taken: 70.166 seconds, Fetched: 1 row(s)

 

