A Spark job suddenly stopped running. After some digging, the cause turned out to be that Redis had run out of memory, which you can confirm with the redis-cli client's info memory command:
./redis-cli -h 127.0.0.1 -p 6379
info memory
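The same check can be done from Java. Below is a minimal sketch using the standard Jedis client; the host and port are placeholders, and on a secured FusionInsight HD cluster the actual connection setup would differ.

import redis.clients.jedis.Jedis;

public class MemoryCheck {
    public static void main(String[] args) {
        // Host and port are placeholders for a reachable Redis node.
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // INFO MEMORY returns "field:value" lines, including
            // used_memory (bytes in use) and maxmemory (0 = no limit).
            System.out.println(jedis.info("memory"));
        }
    }
}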
A batch-deletion approach found online:
./redis-cli -h 127.0.0.1 -p 6379 keys "mykeys*" | xargs ./redis-cli -h 127.0.0.1 -p 6379 del
Running it threw an error; this approach apparently does not work on Huawei's FusionInsight HD cluster. That is consistent with Redis Cluster semantics: KEYS only lists the keys held by the single node you are connected to, and a multi-key DEL fails with a CROSSSLOT error as soon as the keys hash to different slots.
Later, while reading the client's source code, I found a method in the ClusterUtil class that deletes keys in batch:
public void batchDelete(String pattern, int tryTimes)
{
    if (tryTimes <= 0) {
        throw new IllegalArgumentException("tryTimes must be greater than 0");
    }
    // SCAN parameters: match the given key pattern, up to 1000 keys per iteration.
    ScanParams scanParams = new ScanParams().match(pattern).count(1000);
    // Submit one deletion task per serving node, repeated for tryTimes rounds.
    Set<JedisPool> pools = this.jedisCluster.getServingNodes();
    CountDownLatch latch = new CountDownLatch(pools.size() * tryTimes);
    try
    {
        for (int i = 0; i < tryTimes; i++) {
            for (JedisPool jedisPool : pools) {
                this.threadPool.submit(new DelRunnable(jedisPool, scanParams, latch));
            }
        }
        // Block until every DelRunnable has finished its scan-and-delete pass.
        latch.await();
    }
    catch (InterruptedException e)
    {
        throw new JedisException(e);
    }
}
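The excerpt does not include DelRunnable. Judging from the constructor arguments, each task scans its own node for matching keys, deletes them, and counts down the latch when finished. Here is a hypothetical reconstruction under that assumption, written against the plain Jedis 3.x API; the real FusionInsight implementation may differ.

import java.util.concurrent.CountDownLatch;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

// Hypothetical sketch of DelRunnable: scan one cluster node for keys
// matching the pattern, delete them, and release the latch when done.
class DelRunnable implements Runnable {
    private final JedisPool jedisPool;
    private final ScanParams scanParams;
    private final CountDownLatch latch;

    DelRunnable(JedisPool jedisPool, ScanParams scanParams, CountDownLatch latch) {
        this.jedisPool = jedisPool;
        this.scanParams = scanParams;
        this.latch = latch;
    }

    @Override
    public void run() {
        try (Jedis jedis = jedisPool.getResource()) {
            String cursor = ScanParams.SCAN_POINTER_START; // "0"
            do {
                ScanResult<String> result = jedis.scan(cursor, scanParams);
                // Delete keys one at a time; single-key DEL never triggers
                // the CROSSSLOT error that breaks the keys|xargs approach.
                for (String key : result.getResult()) {
                    jedis.del(key);
                }
                cursor = result.getCursor();
            } while (!ScanParams.SCAN_POINTER_START.equals(cursor));
        } finally {
            latch.countDown();
        }
    }
}

With that in place, clearing the keys from the earlier example comes down to a single call such as clusterUtil.batchDelete("mykeys*", 1); how the clusterUtil instance is obtained depends on the FusionInsight client and is omitted here.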