
Hive runtime exception: ERROR | main | Hive Runtime Error: Map local work exhausted memory

Problem description

When Hive executes a SQL statement containing a join, it fails with: ERROR | main | Hive Runtime Error: Map local work exhausted memory
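
For illustration only (the table names fact_orders and small_dim are hypothetical), a query of this shape can hit the error when the smaller side is auto-converted to the broadcast (map-join) side:

SELECT f.order_id, d.region_name
FROM fact_orders f
JOIN small_dim d
  ON f.region_id = d.region_id;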

Analysis

1. The exception log is as follows:

2019-06-24 13:39:41,706 | ERROR | main | Hive Runtime Error: Map local work exhausted memory | org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:400) 
org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionException: 2019-06-24 13:39:41        Processing rows:        1700000        Hashtable size:        1699999        Memory usage:        926540440        percentage:        0.914 
        at org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.checkMemoryStatus(MapJoinMemoryExhaustionHandler.java:99) 
        at org.apache.hadoop.hive.ql.exec.HashTableSinkOperator.process(HashTableSinkOperator.java:253) 
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838) 
        at org.apache.hadoop.hive.ql.exec.FilterOperator.process(FilterOperator.java:122) 
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:838) 
        at org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:132) 
        at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:455) 
        at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:426) 
        at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:392) 
        at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:830) 
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
        at java.lang.reflect.Method.invoke(Method.java:498) 
        at org.apache.hadoop.util.RunJar.run(RunJar.java:225) 
        at org.apache.hadoop.util.RunJar.main(RunJar.java:140) 

From the log, the local task ran out of memory.
2. hive.auto.convert.join is enabled, and the small table's on-disk size is within hive.mapjoin.smalltable.filesize (default 25 MB), so Hive converts the join into a map join. However, the table is stored as compressed ORC: after decompression it may grow to roughly 250 MB, and once loaded into the in-memory hash table its footprint can exceed 1 GB (the relevant settings can be checked as shown after this list).
The JVM Max Heap Size is 1013645312 bytes (about 1 GB):

 2019-06-24 13:39:35,741 | INFO  | main | JVM Max Heap Size: 1013645312 | org.apache.hadoop.hive.ql.exec.mapjoin.MapJoinMemoryExhaustionHandler.<init>(MapJoinMemoryExhaustionHandler.java:61) 
2019-06-24 13:39:35,775 | INFO  | main | Key count from statistics is -1; setting map size to 100000 | org.apache.hadoop.hive.ql.exec.persistence.HashMapWrapper.calculateTableSize(HashMapWrapper.java:95) 
2019-06-24 13:39:35,776 | INFO  | main | Initialization Done 2 HASHTABLESINK done is reset. | org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:373) 

3. Solution approach: increase the memory available to the local task.
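
To confirm the analysis in step 2, the thresholds that drive the map-join conversion and the local task's memory guard can be echoed in the Hive session (running set <property> without a value prints its current value). The defaults noted in the comments are standard Hive defaults and may differ per cluster:

-- is automatic map-join conversion enabled?
set hive.auto.convert.join;
-- on-disk size threshold for the small table (standard default: 25000000 bytes)
set hive.mapjoin.smalltable.filesize;
-- fraction of the local task heap the hash table may use before aborting (standard default: 0.90)
set hive.mapjoin.localtask.max.memory.usage;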

One-line summary

Because hive.auto.convert.join is enabled, the small table is broadcast for a map join; since the source table is stored as ORC, its in-memory size can balloon beyond the local task's heap, and the SQL statement fails.

Solutions

Option 1

Increase the local task's memory: set hive.mapred.local.mem=XX (the heap defaults to about 1 GB here; raise it to 4 GB).
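
A minimal sketch, assuming hive.mapred.local.mem is specified in MB on this deployment:

-- raise the local task heap from the ~1 GB observed in the log to 4 GB
set hive.mapred.local.mem=4096;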

Option 2

Disable auto map join altogether by setting hive.auto.convert.join to false.
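
For example, before running the failing statement:

-- fall back to a common (shuffle) join; no in-memory hash table is built in a local task
set hive.auto.convert.join=false;

The tradeoff is that the join then goes through a full shuffle, which is usually slower than a successful map join.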
