It should have been a quiet evening: snacking on raisins, watching the game, perfectly relaxed. Then an alert SMS shattered the calm like a bolt of lightning: an RS in the production cluster was down! I shot out of bed, pulled up the monitoring, and was instantly stunned: a single machine was pushing 50k ops/sec of read/write throughput! Did that huge write load take the RS down? If so, how exactly? If not, what did? All sorts of questions flashed through my mind, but first things first: back up the logs, then bring the RS back up. After that it was the usual bug-hunting routine, attacking from three sides at once: logs, monitoring, and source code. Let's see what actually happened!
The Crime Scene
The figure below shows the Ganglia monitoring curve of read/write throughput on the affected RegionServer at the time. From roughly 19:00 to 21:30 this RS sustained about 30k ops/sec, peaking at 60k ops/sec. We had previously measured the maximum read/write throughput a single RS in this cluster can handle, and it sits at around 20k, limited mainly by network bandwidth. With the RS running that far above its limit right before it died, my gut said this was the cause of the crash!
Next I pulled up the logs: screen after screen of responseTooSlow, as shown below:
The most likely cause of this kind of symptom is a Full GC, and sure enough, after some patient digging I found plenty of Full GC log fragments like the following:
2016-04-14 21:27:13,174 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 20542ms
GC pool 'ParNew' had collection(s): count=1 time=0ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=2 time=20898ms
2016-04-14 21:27:13,174 WARN [regionserver60020.periodicFlusher] util.Sleeper: We slept 20936ms instead of 100ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
As you can see, HBase went through a CMS GC that suspended every thread in the process for roughly 20 seconds. The MemStore monitoring for that window also shows how heavy the GC activity was, as shown in the figure below:
The most obvious harm from such a long GC pause is that it stalls the business layer on top, and the logs hint at that too:
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
        at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
        at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
        at sun.nio.ch.IOUtil.read(IOUtil.java:197)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:384)
        at org.apache.hadoop.hbase.ipc.RpcServer.channelRead(RpcServer.java:2246)
        at org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess(RpcServer.java:1496)
        ....
2016-04-14 21:32:40,173 WARN [B.DefaultRpcServer.handler=125,queue=5,port=60020] ipc.RpcServer: RpcServer.respondercallId: 7540 service: ClientService methodName: Multi size: 100.2 K connection: 10.160.247.139:56031: output error
2016-04-14 21:32:40,173 WARN [B.DefaultRpcServer.handler=125,queue=5,port=60020] ipc.RpcServer: B.DefaultRpcServer.handler=125,queue=5,port=60020: caught a ClosedChannelException, this means that the server was processing a request but the client went away. The error message was: null
The log above shows that the HBase server, stuck in the Full GC, could not respond to user requests at all; after a while the client hit its SocketTimeout and tore down the connection, and once the connection was gone the server printed the messages shown above. None of this, however, gets us much closer to the real goal: remember, we are trying to find out why the RS died!
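Before moving on, a quick note on where the JvmPauseMonitor warnings quoted above come from. The detection is a simple watchdog pattern: a thread sleeps for a short, fixed interval and measures how much longer than requested the sleep actually took; a large overshoot means the whole JVM (this thread included) was frozen, by GC or by the host. The sketch below only illustrates that pattern, it is not the actual Hadoop/HBase source, and the 500 ms interval and 1 s threshold are assumed values.

// Minimal watchdog sketch: detect long JVM-wide pauses by measuring sleep overshoot.
public class PauseWatchdog implements Runnable {
    private static final long SLEEP_MS = 500;   // assumed polling interval
    private static final long WARN_MS  = 1000;  // assumed warning threshold

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            try {
                Thread.sleep(SLEEP_MS);
            } catch (InterruptedException e) {
                return;
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            long extraMs = elapsedMs - SLEEP_MS;
            if (extraMs > WARN_MS) {
                // If GC (or the host) froze the JVM, this thread was frozen too,
                // so the overshoot approximates the pause length.
                System.err.println("Detected pause of approximately " + extraMs + "ms");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(new PauseWatchdog(), "PauseWatchdog");
        t.setDaemon(true);
        t.start();
        Thread.sleep(60_000); // keep the demo alive for a minute
    }
}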
Building Up the Case
After combing through the crime scene, the only useful lead is that HBase went through severe, frequent Full GCs shortly before dying. The log below further shows that these Full GCs happened under concurrent mode failure: the JVM had not yet finished the current concurrent collection when a flood of new data exhausted the heap, at which point it suspends all user threads and runs a long, stop-the-world Full GC!
(concurrent mode failure): 45876255K->21800674K(46137344K), 10.0625300 secs] 48792749K->21800674K(49283072K), [CMS Perm : 43274K->43274K(262144K)], 10.2083040 secs] [Times: user=12.02 sys=0.00, real=10.20 secs]
2016-04-14 21:22:43,990 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 10055ms
GC pool 'ParNew' had collection(s): count=2 time=244ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=10062ms
We said above that a Full GC badly hurts the business layer on top. Could it also do real damage to the dependencies underneath? The answer is yes! In fact, most of the blame for the RS crash lies exactly there.
Digging further, the HBase log contains the following exception:
2016-04-14 21:22:44,006 WARN [ResponseProcessor for block BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226] hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226
java.io.IOException: Bad response ERROR for block BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226 from datanode 10.160.173.93:50010
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:732)
From the log, HBase called DFSClient to write the block "BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226" to a datanode, and the datanode reported a failure. For the actual reason we have to look at the datanode's own log, shown below:
2016-04-14 21:22:43,789 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226 received exception java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.173.94:50010 remote=/10.160.173.94:30110]
2016-04-14 21:22:43,779 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hz-hbase4.photo.163.org:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.160.173.94:30123 dest: /10.160.173.94:50010
java.net.SocketTimeoutException: 10000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.160.173.94:50010 remote=/10.160.173.94:30123]
Clearly, the datanode sat waiting to read the next chunk of data from the client, but nothing arrived before the 10-second SocketTimeout fired; at that point the datanode dropped the connection, which is what the client later reports as the "Bad response ERROR ***" exception above.
So what does this have to do with Full GC? Simple: the Full GC suspended every thread inside HBase, including the ones streaming data to the datanode, so the datanode waited and waited, got nothing, and eventually had no choice but to close the connection.
Looking at the Hadoop client source, when DFSClient hits this kind of exception it sets a global flag, errorIndex, to a non-zero value. See the code fragment from the DFSOutputStream class referenced below:
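The original post shows that fragment as a screenshot, which is not reproduced here. The sketch below is only a simplified illustration of the behavior just described, not the actual Hadoop source: the ack format and field handling are condensed, and only the errorIndex idea corresponds to the real DFSOutputStream$DataStreamer$ResponseProcessor.

// Simplified illustration: the response-reading thread of the output stream hits
// an IOException ("Bad response ERROR ..." / connection reset) and records which
// datanode in the write pipeline is blamed by setting errorIndex.
class ResponseProcessorSketch implements Runnable {
    private final java.io.DataInputStream blockReplyStream; // acks coming back from the pipeline
    private final String[] pipelineNodes;                   // datanodes in the write pipeline

    // Shared with the writer thread; -1 means no datanode has been blamed yet (simplified).
    volatile int errorIndex = -1;
    volatile boolean hasError = false;

    ResponseProcessorSketch(java.io.DataInputStream in, String[] nodes) {
        this.blockReplyStream = in;
        this.pipelineNodes = nodes;
    }

    @Override
    public void run() {
        try {
            while (!hasError) {
                int ackStatus = blockReplyStream.readInt();   // simplified ack format
                if (ackStatus != 0) {
                    errorIndex = ackStatus - 1;               // index of the datanode that reported the error
                    throw new java.io.IOException("Bad response ERROR from datanode "
                            + pipelineNodes[errorIndex]);
                }
            }
        } catch (java.io.IOException e) {
            // Connection reset / bad ack: flag the failure so the writer thread can react.
            hasError = true;
            if (errorIndex < 0) {
                errorIndex = 0;   // blame the first node when we cannot tell which one failed
            }
        }
    }
}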
Cracking the Case
The groundwork above ends with the Hadoop client setting the global flag errorIndex to a non-zero value. What does that have to do with the RS finally dying? Keep reading. The HBase log fragment captured below records the RS crash in some detail; using these exception messages as the entry point for the analysis, we can pull out at least three useful clues, as shown below:
Clue 1: the most direct cause of the RS crash is that the system failed while closing the LogWriter (after which it would have opened a new HLog)
Clue 2: the LogWriter close failed because an IOException was thrown while "writing trailer"
Clue 3: and the IOException itself was caused by "All datanodes *** are bad"
So the most solid piece of intelligence we have is that the RS crash ultimately comes down to "All datanodes *** are bad". Taken literally, that says this datanode went bad for some reason, so let's go straight to the datanode log and look for any exceptions or errors around that time.
Unfortunately, the datanode log shows no exceptions or errors at that point at all; every service looks perfectly healthy:
2016-04-14 21:32:38,893 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1073941669, srvID: DS-22834907-10.160.173.94-50010-1448595406972, success: true
2016-04-14 21:32:38,894 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: 127.0.0.1, dest: 127.0.0.1, op: REQUEST_SHORT_CIRCUIT_FDS, blockid: 1073941669, srvID: DS-22834907-10.160.173.94-50010-1448595406972, success: true
...
Now it gets confusing: the HBase log insists this datanode has gone bad, while the datanode's own log says everything is fine. Time to open the source and find where HBase (or rather the HDFS client) prints "All datanodes *** are bad". It turns out the ultimate culprit is the errorIndex flag mentioned above, as shown in the screenshot below:
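That screenshot is not reproduced here either. Continuing the simplified ResponseProcessorSketch from earlier, the sketch below illustrates the chain the author describes: once errorIndex marks a datanode as bad, the writer removes it from the pipeline, and if no healthy datanode remains the write aborts with exactly this "All datanodes ... are bad" message. Again, this is an illustration under those simplifying assumptions, not the actual DFSOutputStream code.

// Simplified continuation of the sketch above: how a flagged errorIndex can end in
// "All datanodes ... are bad", which is the exception HBase hits while closing the HLog writer.
class PipelineRecoverySketch {
    static String[] handleDatanodeError(String[] pipelineNodes, int errorIndex)
            throws java.io.IOException {
        if (errorIndex < 0) {
            return pipelineNodes;   // no error recorded, nothing to do
        }
        // Drop the datanode that was blamed for the failure.
        java.util.List<String> remaining = new java.util.ArrayList<String>();
        for (int i = 0; i < pipelineNodes.length; i++) {
            if (i != errorIndex) {
                remaining.add(pipelineNodes[i]);
            }
        }
        if (remaining.isEmpty()) {
            // Nobody left to write to: the stream aborts with the message that shows up
            // in the HBase log, even though the datanode process itself is perfectly healthy.
            throw new java.io.IOException("All datanodes "
                    + java.util.Arrays.toString(pipelineNodes) + " are bad. Aborting...");
        }
        return remaining.toArray(new String[0]);
    }
}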
Daylight at last; any longer and I would have lost it. The groundwork section ended with the Full GC causing DFSClient to set the global errorIndex flag to a non-zero value, and here the two threads of the investigation finally meet. Tears of joy!
Wrapping Up the Case
Even after walking through the whole thing I was still a little dizzy; too many pieces are involved, so it is worth laying the complete chain of events out one more time. Here is a simple diagram:
Working through this case was, on the one hand, good practice in pinning a problem down through monitoring, logs, and source code. On the other hand, it highlights a few points that deserve special attention when operating HBase:
1. A Full GC does not just hurt the business layer by stalling read and write requests. It can also trigger all sorts of anomalies in the traffic between the RegionServer and HDFS, and in severe cases those anomalies can bring the RegionServer down.
2. The Full GCs here were essentially all caused by concurrent mode failure. This kind of Full GC is fairly uncommon and can usually be avoided by lowering the JVM parameter -XX:CMSInitiatingOccupancyFraction, which controls when the CMS collection starts: a value of 60 means a background collection kicks off once the old generation is 60% full, so lowering it makes collections start earlier (a sample set of flags is shown right after this list).
3. Business-level traffic must be strictly limited. That means both talking to the business side so they throttle their own request rates, and exploring new resource-isolation schemes for HBase workloads.
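To make point 2 concrete, here is an illustrative hbase-env.sh fragment with the CMS-related flags discussed above. The 18g heap and the value 60 are example numbers only, not recommendations; they have to be tuned per cluster.

# Start the CMS background collection once the old generation is 60% full,
# so it can finish before the heap fills up and triggers a concurrent mode failure.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xmx18g -Xms18g \
  -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
  -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:CMSInitiatingOccupancyFraction=60 \
  -XX:+CMSParallelRemarkEnabled"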
Very thorough and clearly organized write-up, learned a lot, thanks!
A question while I'm here: is it a sound design to store files in HBase with the file name (UUID+TIMESTAMP) as the rowkey and the file content as the value?
The files are a few tens of KB each, arriving at roughly 50 files per second, and occasionally there is a job that pulls several days' worth of files back out to disk.
Thanks!
In principle it works, but with that rowkey, reading several days' worth of files means a filter-style scan over the whole table, which is likely to be slow. I would suggest MongoDB with the timestamp as a secondary index instead; that fits this read pattern much better.
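For what it's worth, here is a rough sketch of what that "several days of files" extraction could look like with a UUID-prefixed rowkey, expressed with a cell time range rather than an explicit Filter. The table name "files", the column family/qualifier "f:content", and the assumption that each cell's timestamp equals the file's time are all hypothetical.

// Rough sketch: with rowkeys of the form UUID+TIMESTAMP there is no row range that
// corresponds to a time range, so fetching "the last N days" degenerates into a
// full-table scan restricted by cell timestamp.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;

public class TimeRangeScanSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        long now = System.currentTimeMillis();
        long threeDaysAgo = now - 3L * 24 * 3600 * 1000;

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("files"))) {   // hypothetical table
            Scan scan = new Scan();
            scan.setTimeRange(threeDaysAgo, now);  // relies on cell timestamp == file time (assumption)
            scan.setCaching(100);                  // batch rows per RPC to limit round trips
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    byte[] content = r.getValue("f".getBytes(), "content".getBytes()); // hypothetical CF/qualifier
                    // ... write content out to disk ...
                }
            }
        }
    }
}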
We've hit the same thing on our shared cluster: a few big scans in a row can knock the cluster over. We've now ported the community rsgroup feature into our internal branch for resource isolation, but we still have no way to enforce real resource limits.
Isn't rsgroup still experimental? You're already running it in production?
We ported it into our own branch and ran stress and performance tests first; once we were satisfied we went live. One bug to be aware of: https://issues.apache.org/jira/browse/HBASE-16222
Impressive, the Xiaomi HBase folks really deliver. rsgroup only gives you isolation though, right? For actual resource limits I believe Alibaba built a feature to throttle big scans; the community doesn't seem to have done much on that front yet?
How is that big-scan throttling implemented? So far I've only modified the client myself to require that every scan specify a startRow and a stopRow. Is there any material on it?
I only saw it mentioned in one of their slide decks; there was no detailed design, sorry.
Hi, we've run into the same problem. For the HBase GC issue, have you tried the G1 collector?
We haven't tried G1; my impression is it mainly pays off with very large heaps (100 GB and up). For now you can ease the pain by tuning the CMS parameters plus moving the block cache off-heap, or by running several RegionServers on one physical machine (though that makes monitoring more awkward).
Several RegionServers in my HBase cluster fairly frequently hit a situation where the JVM simply stops working, yet no GC is happening at the time.
The logs look like this:
2017-05-26 18:42:10,607 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 48721ms
No GCs detected
2017-05-26 19:12:10,790 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 49500ms
No GCs detected
Watching with visualGC, there really was no GC in that window, yet the whole JVM was stopped; in the first row on the right-hand side of visualgc, both the compile and classloader timers were frozen. Very strange, and we haven't found the cause yet.
Any pointers would be much appreciated!
For now we treat any long JVM pause of this kind as GC-related (I'm not sure of the exact root cause myself; if you want to dig into it, add me on WeChat at libisthanks and I can set up a group chat). How big is the total JVM heap on your RS, and do you have the bucketcache offheap mode enabled?
Thanks for the reply!
We're on HBase 1.1.4.
1. Yes, the bucketcache offheap mode is enabled:
hbase.bucketcache.ioengine = offheap
hbase.bucketcache.size = 14336
2. The RS JVM heap is 18 GB, configured as follows:
HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:SurvivorRatio=4 -server -Xmx18g -Xms18g -Xmn1g -Xss256k -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+UseParNewGC -XX:MaxTenuringThreshold=10 -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:+CMSClassUnloadingEnabled -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:-DisableExplicitGC"
3. After some parameter tuning, the no-GC JVM pauses have gotten shorter, but they still happen.
Here is one example (2017-06-05 19:36):
2017-06-05 19:36:40,932 WARN [JvmPauseMonitor] util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 10030ms
No GCs detected
The GC log for that window is as follows:
2017-06-05T19:36:03.369+0800: 797616.040: Total time for which application threads were stopped: 0.0137030 seconds
2017-06-05T19:36:22.091+0800: 797634.762: [GC2017-06-05T19:36:22.091+0800: 797634.762: [ParNew
Desired survivor size 89456640 bytes, new threshold 10 (max 10)
- age 1: 1033520 bytes, 1033520 total
- age 2: 236264 bytes, 1269784 total
- age 3: 211400 bytes, 1481184 total
- age 4: 260000 bytes, 1741184 total
- age 5: 173448 bytes, 1914632 total
- age 6: 159216 bytes, 2073848 total
- age 7: 173896 bytes, 2247744 total
- age 8: 179192 bytes, 2426936 total
- age 9: 182104 bytes, 2609040 total
- age 10: 191624 bytes, 2800664 total
: 702804K->3678K(873856K), 0.0122400 secs] 17881044K->17182126K(18699648K), 0.0123650 secs] [Times: user=0.21 sys=0.00, real=0.01 secs]
2017-06-05T19:36:22.104+0800: 797634.774: Total time for which application threads were stopped: 0.0134570 seconds
2017-06-05T19:36:23.323+0800: 797635.993: Total time for which application threads were stopped: 0.0010760 seconds
2017-06-05T19:36:23.323+0800: 797635.994: Total time for which application threads were stopped: 0.0002370 seconds
2017-06-05T19:36:40.931+0800: 797653.602: Total time for which application threads were stopped: 10.2617260 seconds
2017-06-05T19:36:40.931+0800: 797653.602: Total time for which application threads were stopped: 0.0002360 seconds
2017-06-05T19:36:40.932+0800: 797653.602: Total time for which application threads were stopped: 0.0002140 seconds
2017-06-05T19:36:40.932+0800: 797653.603: Total time for which application threads were stopped: 0.0002160 seconds
2017-06-05T19:36:40.934+0800: 797653.604: Total time for which application threads were stopped: 0.0003790 seconds
2017-06-05T19:36:40.934+0800: 797653.605: Total time for which application threads were stopped: 0.0002830 seconds
2017-06-05T19:36:40.935+0800: 797653.605: Total time for which application threads were stopped: 0.0001910 seconds
2017-06-05T19:36:40.935+0800: 797653.606: Total time for which application threads were stopped: 0.0001910 seconds
2017-06-05T19:36:40.935+0800: 797653.606: Total time for which application threads were stopped: 0.0001820 seconds
2017-06-05T19:36:40.951+0800: 797653.622: Total time for which application threads were stopped: 0.0003420 seconds
2017-06-05T19:36:40.952+0800: 797653.622: Total time for which application threads were stopped: 0.0001750 seconds
2017-06-05T19:36:40.952+0800: 797653.622: Total time for which application threads were stopped: 0.0001520 seconds
2017-06-05T19:36:40.952+0800: 797653.623: Total time for which application threads were stopped: 0.0001410 seconds
2017-06-05T19:36:45.881+0800: 797658.552: [GC2017-06-05T19:36:45.881+0800: 797658.552: [ParNew
Desired survivor size 89456640 bytes, new threshold 10 (max 10)
- age 1: 1593696 bytes, 1593696 total
- age 2: 244032 bytes, 1837728 total
- age 3: 210744 bytes, 2048472 total
- age 4: 211184 bytes, 2259656 total
- age 5: 193208 bytes, 2452864 total
- age 6: 173448 bytes, 2626312 total
- age 7: 159168 bytes, 2785480 total
- age 8: 173896 bytes, 2959376 total
- age 9: 179168 bytes, 3138544 total
- age 10: 182152 bytes, 3320696 total
: 702799K->4120K(873856K), 0.0133430 secs] 17881247K->17182824K(18699648K), 0.0134700 secs] [Times: user=0.22 sys=0.00, real=0.02 secs]
2017-06-05T19:36:45.895+0800: 797658.565: Total time for which application threads were stopped: 0.0145820 seconds
2017-06-05T19:36:53.935+0800: 797666.606: Total time for which application threads were stopped: 0.0010200 seconds
We ran into a similar problem before, and not just with the regionserver: other processes would also appear to hang. It eventually turned out to be an issue with a particular CentOS version; switching to either a newer or an older release fixed it.
PS: we're running CDH.
Could you share the exact CentOS version and what the underlying problem was? Very curious...
"From the log, it looks like HBase called DFSClient to load block data "BP-632656502-10.160.173.93-1448595094942:blk_1073941840_201226" from the datanode, but the datanode returned a failure."
Judging from the logs quoted before and after that sentence, HBase seems to be writing to the datanode, not reading from it.
You're right, I took another look and there was indeed a slip there. Thanks for pointing it out; it has been fixed.
This nearly brought me to tears. Brilliant analysis, and it solved our problem. Thank you!
Glad it helped; it's an honor.