
Redis Data Expiration Strategies Explained

程序员文章站 2022-06-24 23:29:58

1. Key expiration times in Redis

The expire key seconds command sets a time-to-live on a key. It returns 1 if the expire was set, and 0 if the key does not exist or the expire could not be set. Once an expire is set, the key is automatically deleted after the specified number of seconds. Keys with an expire set are said to be volatile in Redis.

When a key is deleted with DEL, or overwritten with SET or GETSET, the associated expire is cleared:

127.0.0.1:6379> setex s 20 1
OK
127.0.0.1:6379> ttl s
(integer) 17
127.0.0.1:6379> setex s 200 1
OK
127.0.0.1:6379> ttl s
(integer) 195
127.0.0.1:6379> setrange s 3 100
(integer) 6
127.0.0.1:6379> ttl s
(integer) 152
127.0.0.1:6379> get s
"1\x00\x00100"
127.0.0.1:6379> ttl s
(integer) 108
127.0.0.1:6379> getset s 200
"1\x00\x00100"
127.0.0.1:6379> get s
"200"
127.0.0.1:6379> ttl s
(integer) -1

The PERSIST command removes an existing expire:

127.0.0.1:6379> setex s 100 test
OK
127.0.0.1:6379> get s
"test"
127.0.0.1:6379> ttl s
(integer) 94
127.0.0.1:6379> type s
string
127.0.0.1:6379> strlen s
(integer) 4
127.0.0.1:6379> persist s
(integer) 1
127.0.0.1:6379> ttl s
(integer) -1
127.0.0.1:6379> get s
"test"

RENAME only changes the key's name; the expire is carried over to the new name:

127.0.0.1:6379> expire s 200
(integer) 1
127.0.0.1:6379> ttl s
(integer) 198
127.0.0.1:6379> rename s ss
OK
127.0.0.1:6379> ttl ss
(integer) 187
127.0.0.1:6379> type ss
string
127.0.0.1:6379> get ss
"test"

Note: since Redis 2.6 the expire error is from 0 to 1 milliseconds, and expire information is stored as an absolute Unix timestamp (with millisecond precision since Redis 2.6). Because the deadline is absolute, be sure to synchronize the clocks of all servers when replicating across machines.
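Because the stored expire is an absolute timestamp, a relative EXPIRE is converted into an absolute deadline on the server. A minimal Python sketch of that conversion (the function name is mine; it mirrors the base-time logic of expireGenericCommand in src/db.c):

```python
import time

def absolute_deadline_ms(value, unit_seconds=True, basetime_ms=None):
    """Convert an EXPIRE-style argument into the absolute unix-ms deadline
    Redis stores: seconds are scaled to milliseconds, then added to the
    base time (now for EXPIRE/PEXPIRE, 0 for the *AT variants, whose
    argument is already an absolute timestamp)."""
    when = value * 1000 if unit_seconds else value
    if basetime_ms is None:
        basetime_ms = int(time.time() * 1000)
    return when + basetime_ms

# EXPIREAT-style: base time 0, the argument is an absolute unix timestamp
assert absolute_deadline_ms(1700000000, basetime_ms=0) == 1700000000000
# EXPIRE 10 at a (hypothetical) base time of 5000 ms -> deadline 15000 ms
assert absolute_deadline_ms(10, basetime_ms=5000) == 15000
# PEXPIRE-style: milliseconds pass through unscaled
assert absolute_deadline_ms(10, unit_seconds=False, basetime_ms=5000) == 5010
```

This is why clock skew matters: if two servers disagree on "now", they disagree on whether the same absolute deadline has passed.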

2. Expired-key deletion strategies in Redis

Redis removes expired keys in three ways:

  1. Passive deletion: when an already-expired key is read or written, lazy deletion kicks in and removes that key on the spot.
  2. Active deletion: because lazy deletion cannot guarantee that cold data is removed in time, Redis periodically evicts a batch of expired keys.
  3. When used memory exceeds the maxmemory limit, the eviction policy is triggered.

Passive deletion

Redis only checks whether a key has expired when the key is accessed (e.g. by GET); if it has, the key is deleted and nil is returned.

1. This strategy is CPU-friendly: deletions happen only when unavoidable, and no CPU time is wasted on other expired keys.

2. It is not memory-friendly, however: an expired key keeps occupying memory until it is next accessed. If many expired keys exist but are rarely accessed, a large amount of memory is wasted. The expireIfNeeded(redisDb *db, robj *key) function lives in src/db.c:

/*-----------------------------------------------------------------------------
 * Expires API
 *----------------------------------------------------------------------------*/

int removeExpire(redisDb *db, robj *key) {
    /* An expire may only be removed if there is a corresponding entry in the
     * main dict. Otherwise, the key will never be freed. */
    redisAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);
    return dictDelete(db->expires,key->ptr) == DICT_OK;
}

void setExpire(redisDb *db, robj *key, long long when) {
    dictEntry *kde, *de;

    /* Reuse the sds from the main dict in the expire dict */
    kde = dictFind(db->dict,key->ptr);
    redisAssertWithInfo(NULL,key,kde != NULL);
    de = dictReplaceRaw(db->expires,dictGetKey(kde));
    dictSetSignedIntegerVal(de,when);
}

/* Return the expire time of the specified key, or -1 if no expire
 * is associated with this key (i.e. the key is non volatile) */
long long getExpire(redisDb *db, robj *key) {
    dictEntry *de;

    /* No expire? return ASAP */
    if (dictSize(db->expires) == 0 ||
        (de = dictFind(db->expires,key->ptr)) == NULL) return -1;

    /* The entry was found in the expire dict, this means it should also
     * be present in the main dict (safety check). */
    redisAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);
    return dictGetSignedIntegerVal(de);
}

/* Propagate expires into slaves and the AOF file.
 * When a key expires in the master, a DEL operation for this key is sent
 * to all the slaves and the AOF file if enabled.
 *
 * This way the key expiry is centralized in one place, and since both
 * AOF and the master->slave link guarantee operation ordering, everything
 * will be consistent even if we allow write operations against expiring
 * keys. */
void propagateExpire(redisDb *db, robj *key) {
    robj *argv[2];

    argv[0] = shared.del;
    argv[1] = key;
    incrRefCount(argv[0]);
    incrRefCount(argv[1]);

    if (server.aof_state != REDIS_AOF_OFF)
        feedAppendOnlyFile(server.delCommand,db->id,argv,2);
    replicationFeedSlaves(server.slaves,db->id,argv,2);

    decrRefCount(argv[0]);
    decrRefCount(argv[1]);
}

int expireIfNeeded(redisDb *db, robj *key) {
    mstime_t when = getExpire(db,key);
    mstime_t now;

    if (when < 0) return 0; /* No expire for this key */

    /* Don't expire anything while loading. It will be done later. */
    if (server.loading) return 0;

    /* If we are in the context of a Lua script, we claim that time is
     * blocked to when the Lua script started. This way a key can expire
     * only the first time it is accessed and not in the middle of the
     * script execution, making propagation to slaves / AOF consistent.
     * See issue #1525 on Github for more information. */
    now = server.lua_caller ? server.lua_time_start : mstime();

    /* If we are running in the context of a slave, return ASAP:
     * the slave key expiration is controlled by the master that will
     * send us synthesized DEL operations for expired keys.
     *
     * Still we try to return the right information to the caller,
     * that is, 0 if we think the key should be still valid, 1 if
     * we think the key is expired at this time. */
    if (server.masterhost != NULL) return now > when;

    /* Return when this key has not expired */
    if (now <= when) return 0;

    /* Delete the key */
    server.stat_expiredkeys++;
    propagateExpire(db,key);
    notifyKeyspaceEvent(REDIS_NOTIFY_EXPIRED,"expired",key,db->id);
    return dbDelete(db,key);
}
 
/*-----------------------------------------------------------------------------
 * Expires Commands
 *----------------------------------------------------------------------------*/

/* This is the generic command implementation for EXPIRE, PEXPIRE, EXPIREAT
 * and PEXPIREAT. Because the command second argument may be relative or absolute
 * the "basetime" argument is used to signal what the base time is (either 0
 * for *AT variants of the command, or the current time for relative expires).
 *
 * unit is either UNIT_SECONDS or UNIT_MILLISECONDS, and is only used for
 * the argv[2] parameter. The basetime is always specified in milliseconds. */
void expireGenericCommand(redisClient *c, long long basetime, int unit) {
    robj *key = c->argv[1], *param = c->argv[2];
    long long when; /* unix time in milliseconds when the key will expire. */

    if (getLongLongFromObjectOrReply(c, param, &when, NULL) != REDIS_OK)
        return;

    if (unit == UNIT_SECONDS) when *= 1000;
    when += basetime;

    /* No key, return zero. */
    if (lookupKeyRead(c->db,key) == NULL) {
        addReply(c,shared.czero);
        return;
    }

    /* EXPIRE with negative TTL, or EXPIREAT with a timestamp into the past
     * should never be executed as a DEL when load the AOF or in the context
     * of a slave instance.
     *
     * Instead we take the other branch of the IF statement setting an expire
     * (possibly in the past) and wait for an explicit DEL from the master. */
    if (when <= mstime() && !server.loading && !server.masterhost) {
        robj *aux;

        redisAssertWithInfo(c,key,dbDelete(c->db,key));
        server.dirty++;

        /* Replicate/AOF this as an explicit DEL. */
        aux = createStringObject("DEL",3);
        rewriteClientCommandVector(c,2,aux,key);
        decrRefCount(aux);
        signalModifiedKey(c->db,key);
        notifyKeyspaceEvent(REDIS_NOTIFY_GENERIC,"del",key,c->db->id);
        addReply(c, shared.cone);
        return;
    } else {
        setExpire(c->db,key,when);
        addReply(c,shared.cone);
        signalModifiedKey(c->db,key);
        notifyKeyspaceEvent(REDIS_NOTIFY_GENERIC,"expire",key,c->db->id);
        server.dirty++;
        return;
    }
}

void expireCommand(redisClient *c) {
    expireGenericCommand(c,mstime(),UNIT_SECONDS);
}

void expireatCommand(redisClient *c) {
    expireGenericCommand(c,0,UNIT_SECONDS);
}

void pexpireCommand(redisClient *c) {
    expireGenericCommand(c,mstime(),UNIT_MILLISECONDS);
}

void pexpireatCommand(redisClient *c) {
    expireGenericCommand(c,0,UNIT_MILLISECONDS);
}

void ttlGenericCommand(redisClient *c, int output_ms) {
    long long expire, ttl = -1;

    /* If the key does not exist at all, return -2 */
    if (lookupKeyRead(c->db,c->argv[1]) == NULL) {
        addReplyLongLong(c,-2);
        return;
    }
    /* The key exists. Return -1 if it has no expire, or the actual
     * TTL value otherwise. */
    expire = getExpire(c->db,c->argv[1]);
    if (expire != -1) {
        ttl = expire-mstime();
        if (ttl < 0) ttl = 0;
    }
    if (ttl == -1) {
        addReplyLongLong(c,-1);
    } else {
        addReplyLongLong(c,output_ms ? ttl : ((ttl+500)/1000));
    }
}

void ttlCommand(redisClient *c) {
    ttlGenericCommand(c, 0);
}

void pttlCommand(redisClient *c) {
    ttlGenericCommand(c, 1);
}

void persistCommand(redisClient *c) {
    dictEntry *de;

    de = dictFind(c->db->dict,c->argv[1]->ptr);
    if (de == NULL) {
        addReply(c,shared.czero);
    } else {
        if (removeExpire(c->db,c->argv[1])) {
            addReply(c,shared.cone);
            server.dirty++;
        } else {
            addReply(c,shared.czero);
        }
    }
}
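The passive path (check on access, delete if expired, answer nil) can be modeled with a short Python sketch. This is a toy dictionary-based model for illustration, not the real implementation; the class and method names are mine:

```python
import time

class LazyExpiringDict:
    """Toy model of passive expiration: keys are checked only on access."""
    def __init__(self):
        self.data = {}      # key -> value (the "main dict")
        self.expires = {}   # key -> absolute deadline, seconds (the "expires dict")

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.monotonic() + ttl
        else:
            self.expires.pop(key, None)  # SET clears any previous expire

    def get(self, key):
        deadline = self.expires.get(key)
        if deadline is not None and time.monotonic() > deadline:
            # the expireIfNeeded equivalent: delete lazily, reply nil
            del self.data[key]
            del self.expires[key]
            return None
        return self.data.get(key)

d = LazyExpiringDict()
d.set("s", "test", ttl=0.05)
assert d.get("s") == "test"
time.sleep(0.06)
assert d.get("s") is None   # the expired key is deleted on access
assert "s" not in d.data    # ...and no longer occupies memory
```

Note the weakness the next paragraph describes: until `get` is called, the expired entry continues to sit in `data` and `expires`.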

Lazy deletion alone is not enough, because some keys may never be accessed again, and keys with an expire set still need to be removed once they expire. This can even be seen as a form of memory leak: useless garbage data occupies large amounts of memory that the server never releases on its own, which is bad news for a server like Redis whose operation depends so heavily on memory.

Active deletion

First, a word about time events. A continuously running server needs to periodically check and tidy its own resources and state to stay healthy and stable; these operations are collectively called cron jobs.

In Redis, these routine operations are implemented by redis.c/serverCron, which mainly does the following:

  • Update server statistics such as time, memory usage, and database usage.
  • Clean up expired key-value pairs in the databases.
  • Resize databases whose sizing is no longer appropriate.
  • Close and clean up clients whose connections have failed.
  • Attempt AOF or RDB persistence.
  • If the server is a master, periodically synchronize its slaves.
  • If running in cluster mode, periodically synchronize and run connection tests within the cluster.

Redis runs serverCron as a time event, ensuring it executes automatically at regular intervals. Because serverCron must keep running for as long as the Redis server does, it is a cyclic time event: it runs periodically until the server shuts down.

In Redis 2.6, serverCron is hard-coded to run 10 times per second, i.e. once every 100 milliseconds on average. Since Redis 2.8, the hz option lets users adjust how many times per second serverCron runs; see the notes on hz in redis.conf for details.

Active deletion is also called periodic deletion. "Periodic" here refers to the cleanup Redis triggers on a schedule, implemented by the activeExpireCycle(void) function in src/redis.c.

serverCron is a timed task driven by Redis's event framework. This task calls activeExpireCycle, which, for each db, deletes as many expired keys as possible within the time limit REDIS_EXPIRELOOKUPS_TIME_LIMIT. The limit exists to prevent prolonged blocking from disturbing normal Redis operation. This active strategy compensates for the passive strategy's unfriendliness to memory.

Redis therefore periodically tests a random batch of keys that have an expire set, and deletes the ones found to be expired.

A typical cycle looks like this: ten times per second, Redis

  • tests 100 random keys that have an expire set;
  • deletes all the keys found to be expired;
  • if more than 25 of them were expired, starts again from step 1.

This is a simple probabilistic algorithm; the basic assumption is that the sample is representative of the whole key space. Redis keeps purging expired data until the percentage of keys that are likely to be expired drops below 25%. This also means that, at any given moment, the number of already-expired keys still occupying memory is at most the number of write operations per second divided by 4.
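The sampling loop above can be simulated in a few lines of Python. This is a toy model under the stated assumptions; the function name and the synthetic data are mine:

```python
import random

def active_expire_pass(expires, now, sample_size=100, threshold=0.25):
    """Toy model of one active-expire cycle over a single db: repeatedly
    sample volatile keys, delete the expired ones, and stop once the
    expired fraction of a sample drops to the threshold or below."""
    deleted = 0
    while expires:
        keys = random.sample(list(expires), min(sample_size, len(expires)))
        expired = [k for k in keys if expires[k] <= now]
        for k in expired:
            del expires[k]
            deleted += 1
        if len(expired) <= threshold * len(keys):
            break  # the sample suggests <= 25% of volatile keys are expired
    return deleted

random.seed(42)  # deterministic run for the example
# 1000 volatile keys; 400 of them (i % 5 < 2) are already expired at now=100
expires = {f"k{i}": (50 if i % 5 < 2 else 200) for i in range(1000)}
removed = active_expire_pass(expires, now=100)
assert len(expires) == 1000 - removed                      # deleted keys are gone
assert sum(1 for t in expires.values() if t > 100) == 600  # live keys all survive
assert removed > 0                                         # expired keys were purged
```

The loop terminates as soon as a sample indicates the expired fraction has fallen to roughly a quarter, which is exactly why some expired keys may linger in memory.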

The default in redis-3.0.0 is 10, meaning the background task is called 10 times per second.

Besides the frequency of active expiration, Redis also limits the maximum duration of each expiration run, which guarantees that active expiration never blocks application requests for too long. The limit is computed as:

#define ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC 25 /* CPU max % for keys collection */
...
timelimit = 1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/server.hz/100;

Raising hz increases the frequency of active expiration. If your Redis instance holds a lot of cold data that occupies too much memory, consider raising this value, though the Redis author advises keeping it at or below 100. In production we raised it to 100 and observed about a 2% increase in CPU usage, but the memory held by cold data was released noticeably faster (observed via keyspace key counts and used_memory).

As the formula shows, timelimit is inversely proportional to server.hz: the larger hz is, the smaller timelimit becomes. In other words, the higher the desired expiration frequency per second, the shorter each run may take. The total per-second expiration budget is fixed at 250 ms (1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/100); the hz parameter controls how that budget is split between run frequency and per-run duration.

From this analysis: as long as the ratio of expired keys in Redis stays below 25%, raising hz clearly raises the minimum number of keys scanned. With hz=10, at least 200 keys are scanned per second (10 calls per second × at least 20 random keys per call); with hz=100, at least 2000 keys per second. On the other hand, once the expired-key ratio exceeds 25%, there is no upper bound on the number of keys scanned, but CPU time is capped at 250 ms per second.
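The relationship between hz and timelimit can be checked numerically. The sketch below reproduces the formula from the snippet above in Python (the helper name is mine):

```python
ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC = 25  # CPU max % for keys collection

def timelimit_us(hz):
    """Per-call time budget of activeExpireCycle, in microseconds,
    using the same integer arithmetic as the C expression."""
    return 1000000 * ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC // hz // 100

# hz=10: each call may run 25 ms; 10 calls/s -> 250 ms/s in total
assert timelimit_us(10) == 25000
# hz=100: each call may run 2.5 ms; 100 calls/s -> still 250 ms/s in total
assert timelimit_us(100) == 2500
# the per-second budget is fixed at 250 ms regardless of hz
assert timelimit_us(10) * 10 == timelimit_us(100) * 100 == 250000
```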

When Redis runs in master-slave mode, only the master executes the two expiration strategies above; it then propagates the deletions to the slaves as "DEL key" operations.

maxmemory

When used memory exceeds the maxmemory limit, the eviction policy is triggered:

  • volatile-lru: evict keys that have an expire set, using an LRU algorithm
  • allkeys-lru: evict any key, using an LRU algorithm
  • volatile-random: evict a random key among those with an expire set
  • allkeys-random: evict a random key, any key
  • volatile-ttl: evict the key with the nearest expire time
  • noeviction: never evict; return an error on write operations (the default)

Once mem_used exceeds the maxmemory setting, every read and write request triggers redis.c/freeMemoryIfNeeded(void) to free the excess memory. Note that this cleanup blocks until enough memory has been freed. So if maxmemory is reached while callers keep writing, the eviction policy may be triggered over and over, adding latency to requests.

Cleanup follows the configured maxmemory-policy (usually an LRU or TTL policy). The LRU or TTL policy here is not applied across all of Redis's keys: instead, maxmemory-samples keys are drawn as a sample pool and eviction picks from that sample.

maxmemory-samples defaults to 5 in redis-3.0.0. Increasing it improves the accuracy of LRU or TTL eviction; the Redis author's tests show that a value of 10 already comes very close to true LRU. Increasing maxmemory-samples also costs more CPU during eviction, however. Recommendations:

  • Try not to hit maxmemory at all: once mem_used reaches a certain fraction of maxmemory, consider raising hz to speed up expiration, or scale the cluster out.
  • If memory is under control, there is no need to change maxmemory-samples. If Redis itself serves as an LRU cache (such deployments usually run at maxmemory permanently, relying on Redis's automatic LRU eviction), consider raising maxmemory-samples.
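The sample-based eviction described above can be sketched as follows. This is a toy Python model with names of my own; the real implementation tracks per-key idle time internally (and, in Redis 3.0, additionally maintains an eviction pool of best candidates):

```python
import random

def evict_one_sampled_lru(last_access, sample_size=5):
    """Approximate LRU in the style of maxmemory-samples: sample a few
    keys and evict the one with the oldest recorded access time."""
    sample = random.sample(list(last_access), min(sample_size, len(last_access)))
    victim = min(sample, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

random.seed(7)
pool = {f"k{i}": i for i in range(100)}    # k0 has the oldest access time
v = evict_one_sampled_lru(pool, sample_size=10)
assert v not in pool and len(pool) == 99   # exactly one key was evicted

# with a full sample, sampled LRU degenerates into exact LRU
full = {f"k{i}": i for i in range(100)}
assert evict_one_sampled_lru(full, sample_size=100) == "k0"
```

The second case illustrates why raising maxmemory-samples improves accuracy: the larger the sample, the more likely the true least-recently-used key is in it.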

The configuration parameters mentioned above are documented in redis.conf as follows:

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among the following behaviors:
#
# volatile-lru -> remove the key with an expire set using an LRU algorithm
# allkeys-lru -> remove any key according to the LRU algorithm
# volatile-random -> remove a random key with an expire set
# allkeys-random -> remove a random key, any key
# volatile-ttl -> remove the key with the nearest expire time (minor TTL)
# noeviction -> don't expire at all, just return an error on write operations
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
maxmemory-policy noeviction

# LRU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. By default Redis will check five keys and pick the one that was
# used less recently; you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 approximates very closely
# true LRU but costs a bit more CPU. 3 is very fast but not very accurate.
#
maxmemory-samples 5

Expiration handling over the replication link and in the AOF file

To get correct behavior without consistency problems, when a key expires, a DEL operation is recorded in the AOF file and propagated to all the relevant slaves. In other words, expiration-driven deletion is centralized in the master instance and pushed downstream, rather than being decided by each slave on its own, so no data inconsistency can arise. A slave does not immediately clean up expired keys in its dataset (it must wait for the DEL passed down from the master), but it still maintains the expiration state of the dataset so that, when promoted to master, it can handle expiration independently, just like a master.

Summary

That is all for this article. I hope its content helps with your study or work; if you have any questions, feel free to leave a comment.