
Troubleshooting a Sqoop error: Error running child : java.lang.OutOfMemoryError: Java heap space


The error stack:

2017-06-16 19:50:51,002 INFO [main] org.apache.hadoop.mapred.MapTask: Processing split: 1=1 AND 1=1
2017-06-16 19:50:51,043 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Working on split: 1=1 AND 1=1
2017-06-16 19:50:51,095 INFO [main] org.apache.sqoop.mapreduce.db.DBRecordReader: Executing query: select "EXTEND3","EXTEND2","EXTEND1","MEMO","OPER_DATE","OPER_CODE","FILE_CONTENT","FILE_NAME","INPATIENT_NO","ID" from HIS_SDZL."MDT_FILE" tbl where ( 1=1 ) AND ( 1=1 )
2017-06-16 20:00:22,170 INFO [Thread-13] org.apache.sqoop.mapreduce.AutoProgressMapper: Auto-progress thread is finished. keepGoing=false
2017-06-16 20:00:22,185 FATAL [main] org.apache.hadoop.mapred.YarnChild: Error running child : java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:137)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:121)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:514)
    at java.lang.StringBuffer.append(StringBuffer.java:352)
    at java.util.regex.Matcher.appendReplacement(Matcher.java:888)
    at java.util.regex.Matcher.replaceAll(Matcher.java:955)
    at java.lang.String.replaceAll(String.java:2223)
    at QueryResult.readFields(QueryResult.java:205)
    at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:244)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
    at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Shrinking the fetch-size parameter did not help either, so the problem was most likely a single row occupying a huge amount of memory. Line 205 of QueryResult.java, the record class Sqoop generates for the imported table, points to FILE_CONTENT, a binary column, as the failing field. Querying the source database confirmed it: the largest value in that column reaches about 180 MB:

(screenshot from the original post: a query against the source database showing a maximum FILE_CONTENT length of about 180 MB)
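Since that 180 MB value is materialized as a single Java String and then copied again by the replaceAll call visible in the stack, the map task's heap must be several multiples of the largest row. One workaround is simply to give each mapper a bigger heap. A minimal sketch, assuming a Hadoop 2.x / YARN cluster; the memory values, JDBC URL, username, and target directory are illustrative, not from the original post:

    # Hypothetical connection details; raise mapper memory so one huge row fits in the heap.
    sqoop import \
        -Dmapreduce.map.memory.mb=8192 \
        -Dmapreduce.map.java.opts=-Xmx7g \
        --connect jdbc:oracle:thin:@//dbhost:1521/orcl \
        --username his_user -P \
        --table HIS_SDZL.MDT_FILE \
        --target-dir /tmp/mdt_file \
        -m 1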

P.S.: How can the size of a BLOB column be queried with standard SQL?
There are several kinds of LOB columns. For a plain BLOB (e.g. on Oracle 9i), length or lengthb should work; failing that, dbms_lob.getlength() can be used.
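For instance, the check can be run straight from the Sqoop host with sqoop eval, so no separate SQL client is needed. A sketch assuming hypothetical connection details:

    # JDBC URL and username are placeholders; dbms_lob.getlength() returns bytes for a BLOB.
    sqoop eval \
        --connect jdbc:oracle:thin:@//dbhost:1521/orcl \
        --username his_user -P \
        --query 'SELECT MAX(dbms_lob.getlength("FILE_CONTENT")) AS max_bytes FROM HIS_SDZL."MDT_FILE"'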
