1. Redis高并发流量支撑概述

Redis作为高性能的内存数据库,在支撑高并发流量方面具有显著优势。其单线程命令执行模型(Redis 6.0起网络I/O可选多线程)、内存存储特性以及丰富的数据结构,使单实例能够处理数万到十万级的并发请求。本文将详细介绍Redis在高并发场景下的优化策略、部署方案以及最佳实践。

1.1 Redis高并发核心优势

  1. 内存存储: 数据存储在内存中,访问速度极快
  2. 单线程模型: 避免线程切换开销,简化并发控制
  3. 事件驱动: 基于epoll的异步I/O模型
  4. 数据结构丰富: 支持多种高效数据结构
  5. 持久化机制: 提供RDB和AOF两种持久化方式
  6. 集群支持: 支持主从复制和集群模式

1.2 Redis高并发架构特点

  • 单线程: 主线程处理所有命令,避免锁竞争
  • 内存操作: 所有数据操作都在内存中进行
  • 网络I/O: 使用epoll实现高效的网络I/O
  • 数据结构: 针对不同场景优化的数据结构
  • 持久化: 异步持久化,不影响主线程性能
  • 集群: 支持水平扩展和故障转移
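上面"单线程 + 事件驱动"的要点可以用一个假设性的Java示意来说明(SingleThreadLoop等命名为本文虚构,并非Redis源码):所有命令提交到同一个单线程执行器串行处理,因此底层数据结构用普通HashMap即可,完全不需要加锁。

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// 单线程命令循环的极简示意: 命令串行执行, 数据结构无锁
public class SingleThreadLoop {
    // 仅由事件线程访问, 因此无需ConcurrentHashMap或锁
    private final Map<String, String> store = new HashMap<>();
    private final ExecutorService eventLoop = Executors.newSingleThreadExecutor();

    public Future<String> set(String key, String value) {
        return eventLoop.submit(() -> { store.put(key, value); return "OK"; });
    }

    public Future<String> get(String key) {
        return eventLoop.submit(() -> store.get(key));
    }

    public void shutdown() { eventLoop.shutdown(); }

    // 辅助方法: 同步等待结果, 把受检异常转为运行时异常
    public static String await(Future<String> f) {
        try { return f.get(); } catch (Exception e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        SingleThreadLoop loop = new SingleThreadLoop();
        System.out.println(await(loop.set("k", "v"))); // OK
        System.out.println(await(loop.get("k")));      // v
        loop.shutdown();
    }
}
```

真实的Redis事件循环基于epoll等I/O多路复用而非线程池,但"单线程串行执行、免锁"的并发模型与此一致。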

1.3 Redis高并发性能指标

  • QPS: 每秒查询数,可达10万+
  • 延迟: 亚毫秒级响应时间(单命令的服务端处理可达微秒级)
  • 并发连接: 支持数万并发连接
  • 内存效率: 高效的内存使用
  • CPU效率: 单线程充分利用CPU
  • 网络效率: 高效的网络I/O处理

2. Redis高并发配置优化

2.1 Redis主配置文件

# Redis主配置文件 - 高并发优化
# /etc/redis/redis.conf

# 网络配置
bind 0.0.0.0
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300

# 通用配置
daemonize yes
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16

# 内存配置
maxmemory 8gb
maxmemory-policy allkeys-lru
maxmemory-samples 5

# 持久化配置
# RDB配置
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir /var/lib/redis

# AOF配置
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes

# 客户端配置
maxclients 10000
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# 安全配置(生产环境建议结合Redis 6+的ACL机制管理危险命令)
requirepass your_redis_password
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command KEYS ""
rename-command CONFIG ""

# 慢查询配置
slowlog-log-slower-than 10000
slowlog-max-len 128

# 延迟监控
latency-monitor-threshold 100

# 高级配置(以下ziplist参数在Redis 7+中更名为listpack, 如hash-max-listpack-entries)
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes

2.2 Redis集群配置

# Redis集群配置文件
# /etc/redis/cluster.conf

# 集群配置
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-announce-ip 192.168.1.100
cluster-announce-port 6379
cluster-announce-bus-port 16379

# 集群节点配置
cluster-require-full-coverage yes
cluster-allow-reads-when-down no
cluster-replica-validity-factor 10
cluster-migration-barrier 1
cluster-allow-replica-migration yes

# 集群优化配置
# (cluster-slave-no-failover为旧版写法, Redis 5+推荐replica命名)
cluster-replica-no-failover no
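集群模式下key的分布规则是: 整个键空间划分为16384个槽, slot = CRC16(key) mod 16384; 若key含`{hashtag}`,则只对花括号内的部分取哈希,从而让相关key落在同一槽。下面是该规则的一个假设性Java实现(ClusterSlot为本文虚构命名,CRC16采用与Redis集群相同的CCITT/XMODEM多项式0x1021):

```java
import java.nio.charset.StandardCharsets;

// 集群槽位计算示意: slot = CRC16(key) mod 16384
public class ClusterSlot {
    // CRC16-CCITT(XMODEM): 初始值0, 多项式0x1021, 无反转
    public static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    public static int slot(String key) {
        int start = key.indexOf('{');
        if (start >= 0) {
            int end = key.indexOf('}', start + 1);
            if (end > start + 1) { // 仅非空hashtag生效
                key = key.substring(start + 1, end);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }
}
```

例如`{user1000}.following`与`{user1000}.followers`计算出的槽相同,因此可以在同一节点上对它们执行MGET等多key命令。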

2.3 Redis Sentinel配置

# Redis Sentinel配置文件
# /etc/redis/sentinel.conf

# Sentinel配置
port 26379
sentinel monitor mymaster 192.168.1.100 6379 2
sentinel auth-pass mymaster your_redis_password
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
sentinel deny-scripts-reconfig yes

# Sentinel日志
logfile /var/log/redis/sentinel.log
loglevel notice

# Sentinel网络配置
bind 0.0.0.0
protected-mode no

3. Redis高并发Java实现

3.1 Redis连接池配置

package com.example.redis.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisStandaloneConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

import java.time.Duration;

/**
* Redis高并发配置类
* 提供高性能的Redis连接池和模板配置
*/
@Configuration
public class RedisHighConcurrencyConfig {

@Value("${redis.host:localhost}")
private String redisHost;

@Value("${redis.port:6379}")
private int redisPort;

@Value("${redis.password:}")
private String redisPassword;

@Value("${redis.database:0}")
private int redisDatabase;

@Value("${redis.pool.max-total:200}")
private int maxTotal;

@Value("${redis.pool.max-idle:50}")
private int maxIdle;

@Value("${redis.pool.min-idle:10}")
private int minIdle;

@Value("${redis.pool.max-wait-millis:3000}")
private long maxWaitMillis;

@Value("${redis.pool.test-on-borrow:true}")
private boolean testOnBorrow;

@Value("${redis.pool.test-on-return:false}")
private boolean testOnReturn;

@Value("${redis.pool.test-while-idle:true}")
private boolean testWhileIdle;

@Value("${redis.timeout:2000}")
private int timeout;

/**
* 创建Redis连接工厂
* 使用Lettuce连接池提供高性能连接
*/
@Bean
public RedisConnectionFactory redisConnectionFactory() {
// Redis服务器配置
RedisStandaloneConfiguration redisConfig = new RedisStandaloneConfiguration();
redisConfig.setHostName(redisHost);
redisConfig.setPort(redisPort);
redisConfig.setPassword(redisPassword);
redisConfig.setDatabase(redisDatabase);

// 连接池配置
GenericObjectPoolConfig poolConfig = new GenericObjectPoolConfig();
poolConfig.setMaxTotal(maxTotal);
poolConfig.setMaxIdle(maxIdle);
poolConfig.setMinIdle(minIdle);
poolConfig.setMaxWaitMillis(maxWaitMillis);
poolConfig.setTestOnBorrow(testOnBorrow);
poolConfig.setTestOnReturn(testOnReturn);
poolConfig.setTestWhileIdle(testWhileIdle);
poolConfig.setTimeBetweenEvictionRunsMillis(30000);
poolConfig.setMinEvictableIdleTimeMillis(60000);
poolConfig.setNumTestsPerEvictionRun(3);
poolConfig.setBlockWhenExhausted(true);

// Lettuce客户端配置(复用下方clientResources Bean, 共享I/O线程池)
LettucePoolingClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
.clientResources(clientResources())
.poolConfig(poolConfig)
.commandTimeout(Duration.ofMillis(timeout))
.shutdownTimeout(Duration.ofMillis(100))
.build();

// 创建连接工厂
return new LettuceConnectionFactory(redisConfig, clientConfig);
}

/**
* 创建Redis模板
* 配置序列化器提供高性能序列化
*/
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
RedisTemplate<String, Object> template = new RedisTemplate<>();
template.setConnectionFactory(connectionFactory);

// 设置序列化器
StringRedisSerializer stringSerializer = new StringRedisSerializer();
GenericJackson2JsonRedisSerializer jsonSerializer = new GenericJackson2JsonRedisSerializer();

// Key序列化
template.setKeySerializer(stringSerializer);
template.setHashKeySerializer(stringSerializer);

// Value序列化
template.setValueSerializer(jsonSerializer);
template.setHashValueSerializer(jsonSerializer);

// 设置默认序列化器
template.setDefaultSerializer(jsonSerializer);

// 启用事务支持
template.setEnableTransactionSupport(true);

template.afterPropertiesSet();
return template;
}

/**
* 创建Redis字符串模板
* 专门用于字符串操作的高性能模板
*/
@Bean
public RedisTemplate<String, String> stringRedisTemplate(RedisConnectionFactory connectionFactory) {
RedisTemplate<String, String> template = new RedisTemplate<>();
template.setConnectionFactory(connectionFactory);

// 使用字符串序列化器
StringRedisSerializer stringSerializer = new StringRedisSerializer();
template.setKeySerializer(stringSerializer);
template.setValueSerializer(stringSerializer);
template.setHashKeySerializer(stringSerializer);
template.setHashValueSerializer(stringSerializer);

template.afterPropertiesSet();
return template;
}

/**
* 创建客户端资源
* 优化客户端性能
*/
@Bean(destroyMethod = "shutdown")
public ClientResources clientResources() {
return DefaultClientResources.builder()
.ioThreadPoolSize(4)
.computationThreadPoolSize(4)
.build();
}
}

3.2 Redis高并发服务类

package com.example.redis.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;

import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
* Redis高并发服务类
* 提供高性能的Redis操作服务
*/
@Service
public class RedisHighConcurrencyService {

@Autowired
private RedisTemplate<String, Object> redisTemplate;

@Autowired
private StringRedisTemplate stringRedisTemplate;

/**
* 设置缓存(不设置过期时间; 需要TTL请使用下面的重载方法)
*/
public boolean set(String key, Object value) {
try {
redisTemplate.opsForValue().set(key, value);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 设置缓存并指定过期时间
*/
public boolean set(String key, Object value, long time, TimeUnit timeUnit) {
try {
redisTemplate.opsForValue().set(key, value, time, timeUnit);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 设置缓存并指定过期时间(秒)
*/
public boolean set(String key, Object value, long time) {
return set(key, value, time, TimeUnit.SECONDS);
}

/**
* 获取缓存
*/
public Object get(String key) {
try {
return redisTemplate.opsForValue().get(key);
} catch (Exception e) {
e.printStackTrace();
return null;
}
}

/**
* 获取缓存并指定类型
*/
@SuppressWarnings("unchecked")
public <T> T get(String key, Class<T> clazz) {
try {
Object value = redisTemplate.opsForValue().get(key);
if (value != null && clazz.isInstance(value)) {
return (T) value;
}
return null;
} catch (Exception e) {
e.printStackTrace();
return null;
}
}

/**
* 删除缓存
*/
public boolean delete(String key) {
try {
return Boolean.TRUE.equals(redisTemplate.delete(key));
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 批量删除缓存
*/
public boolean delete(Collection<String> keys) {
try {
if (CollectionUtils.isEmpty(keys)) {
return false;
}
Long deleted = redisTemplate.delete(keys);
return deleted != null && deleted > 0;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 判断缓存是否存在
*/
public boolean hasKey(String key) {
try {
return Boolean.TRUE.equals(redisTemplate.hasKey(key));
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 设置过期时间
*/
public boolean expire(String key, long time, TimeUnit timeUnit) {
try {
return Boolean.TRUE.equals(redisTemplate.expire(key, time, timeUnit));
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 设置过期时间(秒)
*/
public boolean expire(String key, long time) {
return expire(key, time, TimeUnit.SECONDS);
}

/**
* 获取过期时间
*/
public long getExpire(String key) {
try {
Long expire = redisTemplate.getExpire(key);
return expire != null ? expire : -1;
} catch (Exception e) {
e.printStackTrace();
return -1;
}
}

/**
* 递增
*/
public long increment(String key) {
try {
Long value = redisTemplate.opsForValue().increment(key);
return value != null ? value : 0;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}

/**
* 递增指定值
*/
public long increment(String key, long delta) {
try {
Long value = redisTemplate.opsForValue().increment(key, delta);
return value != null ? value : 0;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}

/**
* 递减
*/
public long decrement(String key) {
try {
Long value = redisTemplate.opsForValue().decrement(key);
return value != null ? value : 0;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}

/**
* 递减指定值
*/
public long decrement(String key, long delta) {
try {
Long value = redisTemplate.opsForValue().decrement(key, delta);
return value != null ? value : 0;
} catch (Exception e) {
e.printStackTrace();
return 0;
}
}

/**
* 获取所有匹配的key
*/
public Set<String> keys(String pattern) {
try {
return redisTemplate.keys(pattern);
} catch (Exception e) {
e.printStackTrace();
return new HashSet<>();
}
}

/**
* 批量获取
*/
public List<Object> multiGet(Collection<String> keys) {
try {
if (CollectionUtils.isEmpty(keys)) {
return new ArrayList<>();
}
return redisTemplate.opsForValue().multiGet(keys);
} catch (Exception e) {
e.printStackTrace();
return new ArrayList<>();
}
}

/**
* 批量设置
*/
public boolean multiSet(Map<String, Object> map) {
try {
if (map == null || map.isEmpty()) {
return false;
}
redisTemplate.opsForValue().multiSet(map);
return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 分布式锁
*/
public boolean lock(String key, String value, long time, TimeUnit timeUnit) {
try {
Boolean result = redisTemplate.opsForValue().setIfAbsent(key, value, time, timeUnit);
return Boolean.TRUE.equals(result);
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 释放分布式锁
*/
public boolean unlock(String key, String value) {
try {
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
// 使用RedisScript执行, 保证value经过与setIfAbsent时相同的序列化方式
org.springframework.data.redis.core.script.DefaultRedisScript<Long> redisScript =
new org.springframework.data.redis.core.script.DefaultRedisScript<>(script, Long.class);
Long result = redisTemplate.execute(redisScript, Collections.singletonList(key), value);
return result != null && result > 0;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 获取Redis信息
*/
public Map<String, Object> getRedisInfo() {
Map<String, Object> info = new HashMap<>();
try {
Properties properties = redisTemplate.getConnectionFactory().getConnection().info();
for (String key : properties.stringPropertyNames()) {
info.put(key, properties.getProperty(key));
}
} catch (Exception e) {
e.printStackTrace();
}
return info;
}

/**
* 获取Redis统计信息
*/
public Map<String, Object> getRedisStats() {
Map<String, Object> stats = new HashMap<>();
try {
Properties info = redisTemplate.getConnectionFactory().getConnection().info();

// 连接数统计
stats.put("connected_clients", info.getProperty("connected_clients"));
stats.put("total_connections_received", info.getProperty("total_connections_received"));
stats.put("rejected_connections", info.getProperty("rejected_connections"));

// 命令统计
stats.put("total_commands_processed", info.getProperty("total_commands_processed"));
stats.put("instantaneous_ops_per_sec", info.getProperty("instantaneous_ops_per_sec"));

// 内存统计
stats.put("used_memory", info.getProperty("used_memory"));
stats.put("used_memory_human", info.getProperty("used_memory_human"));
stats.put("used_memory_peak", info.getProperty("used_memory_peak"));
stats.put("used_memory_peak_human", info.getProperty("used_memory_peak_human"));

// 持久化统计
stats.put("rdb_last_save_time", info.getProperty("rdb_last_save_time"));
stats.put("rdb_changes_since_last_save", info.getProperty("rdb_changes_since_last_save"));
stats.put("aof_enabled", info.getProperty("aof_enabled"));

// 复制统计
stats.put("role", info.getProperty("role"));
stats.put("master_repl_offset", info.getProperty("master_repl_offset"));
stats.put("slave_repl_offset", info.getProperty("slave_repl_offset"));

} catch (Exception e) {
e.printStackTrace();
}
return stats;
}
}
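上面unlock中Lua脚本的核心语义是"先比较锁令牌再删除":只有持有正确value的客户端才能释放锁,避免误删他人持有的锁。下面用本地Map做一个不依赖Redis的假设性示意(CompareAndDeleteLock为本文虚构命名):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// "比较再删除"锁语义示意: lock对应SET key token NX, unlock对应Lua脚本
public class CompareAndDeleteLock {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    public boolean lock(String key, String token) {
        // putIfAbsent仅在key不存在时写入, 等价于SETNX
        return store.putIfAbsent(key, token) == null;
    }

    public boolean unlock(String key, String token) {
        // remove(key, value)只在当前值等于token时删除, 等价于Lua中的get==ARGV[1]再del
        return store.remove(key, token);
    }
}
```

在Redis中必须用Lua脚本(或RedisTemplate的execute(RedisScript...))完成这一步,因为GET和DEL两条命令之间可能被其他客户端抢占,而脚本在服务端是原子执行的。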

3.3 Redis缓存策略服务

package com.example.redis.service;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Service;
import org.springframework.util.CollectionUtils;

import java.util.*;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

/**
* Redis缓存策略服务类
* 实现多种缓存策略以支撑高并发流量
*/
@Service
public class RedisCacheStrategyService {

@Autowired
private RedisTemplate<String, Object> redisTemplate;

/**
* Cache-Aside策略
* 应用程序负责管理缓存
*/
public Object cacheAside(String key, CacheLoader loader, long expireTime, TimeUnit timeUnit) {
try {
// 1. 先从缓存获取
Object value = redisTemplate.opsForValue().get(key);
if (value != null) {
return value;
}

// 2. 缓存未命中,从数据源加载
value = loader.load();
if (value != null) {
// 3. 将数据写入缓存
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
}
return value;
} catch (Exception e) {
e.printStackTrace();
return loader.load();
}
}

/**
* Write-Through策略
* 同时写入缓存和数据库
*/
public boolean writeThrough(String key, Object value, CacheWriter writer, long expireTime, TimeUnit timeUnit) {
try {
// 1. 写入数据库
boolean dbResult = writer.write(value);
if (dbResult) {
// 2. 写入缓存
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
return true;
}
return false;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* Write-Behind策略
* 先写入缓存,异步写入数据库
*/
public boolean writeBehind(String key, Object value, CacheWriter writer, long expireTime, TimeUnit timeUnit) {
try {
// 1. 先写入缓存
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);

// 2. 异步写入数据库(示例用裸线程, 生产环境应使用线程池或消息队列保证可靠落库)
new Thread(() -> {
try {
writer.write(value);
} catch (Exception e) {
e.printStackTrace();
}
}).start();

return true;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* Refresh-Ahead策略
* 提前刷新即将过期的缓存
*/
public Object refreshAhead(String key, CacheLoader loader, long expireTime, TimeUnit timeUnit, long refreshTime) {
try {
// 1. 获取缓存
Object value = redisTemplate.opsForValue().get(key);
if (value != null) {
// 2. 检查是否需要刷新(getExpire可能返回null或负值, 需判空后再比较)
Long ttl = redisTemplate.getExpire(key, timeUnit);
if (ttl != null && ttl >= 0 && ttl <= refreshTime) {
// 3. 异步刷新缓存
new Thread(() -> {
try {
Object newValue = loader.load();
if (newValue != null) {
redisTemplate.opsForValue().set(key, newValue, expireTime, timeUnit);
}
} catch (Exception e) {
e.printStackTrace();
}
}).start();
}
return value;
}

// 4. 缓存未命中,加载数据
value = loader.load();
if (value != null) {
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
}
return value;
} catch (Exception e) {
e.printStackTrace();
return loader.load();
}
}

/**
* 多级缓存策略
* L1: 本地缓存, L2: Redis缓存
*/
public Object multiLevelCache(String key, CacheLoader loader, long expireTime, TimeUnit timeUnit) {
try {
// 1. 尝试从L1缓存获取
Object value = getFromL1Cache(key);
if (value != null) {
return value;
}

// 2. 尝试从L2缓存获取
value = redisTemplate.opsForValue().get(key);
if (value != null) {
// 3. 写入L1缓存
setToL1Cache(key, value);
return value;
}

// 4. 从数据源加载
value = loader.load();
if (value != null) {
// 5. 写入L2缓存
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
// 6. 写入L1缓存
setToL1Cache(key, value);
}
return value;
} catch (Exception e) {
e.printStackTrace();
return loader.load();
}
}

/**
* 缓存预热
*/
public void cacheWarmup(Map<String, CacheLoader> keyLoaders, long expireTime, TimeUnit timeUnit) {
try {
for (Map.Entry<String, CacheLoader> entry : keyLoaders.entrySet()) {
String key = entry.getKey();
CacheLoader loader = entry.getValue();

// 异步预热
new Thread(() -> {
try {
Object value = loader.load();
if (value != null) {
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
}
} catch (Exception e) {
e.printStackTrace();
}
}).start();
}
} catch (Exception e) {
e.printStackTrace();
}
}

/**
* 缓存雪崩防护
* 使用随机过期时间防止缓存同时失效
*/
public Object cacheAvalancheProtection(String key, CacheLoader loader, long baseExpireTime, TimeUnit timeUnit) {
try {
// 1. 获取缓存
Object value = redisTemplate.opsForValue().get(key);
if (value != null) {
return value;
}

// 2. 使用分布式锁防止缓存击穿
String lockKey = "lock:" + key;
String lockValue = UUID.randomUUID().toString();

if (lock(lockKey, lockValue, 10, TimeUnit.SECONDS)) {
try {
// 3. 双重检查
value = redisTemplate.opsForValue().get(key);
if (value != null) {
return value;
}

// 4. 加载数据
value = loader.load();
if (value != null) {
// 5. 设置随机过期时间
long randomExpireTime = baseExpireTime + (long) (Math.random() * baseExpireTime * 0.1);
redisTemplate.opsForValue().set(key, value, randomExpireTime, timeUnit);
}
return value;
} finally {
unlock(lockKey, lockValue);
}
} else {
// 6. 获取锁失败,等待一段时间后重试
Thread.sleep(100);
return redisTemplate.opsForValue().get(key);
}
} catch (Exception e) {
e.printStackTrace();
return loader.load();
}
}

/**
* 缓存穿透防护
* 使用布隆过滤器防止无效请求
*/
public Object cachePenetrationProtection(String key, CacheLoader loader, long expireTime, TimeUnit timeUnit) {
try {
// 1. 检查布隆过滤器
if (!bloomFilterContains(key)) {
return null;
}

// 2. 获取缓存
Object value = redisTemplate.opsForValue().get(key);
if (value != null) {
return value;
}

// 3. 加载数据
value = loader.load();
if (value != null) {
redisTemplate.opsForValue().set(key, value, expireTime, timeUnit);
} else {
// 4. 设置空值缓存防止穿透
redisTemplate.opsForValue().set(key, "", 60, TimeUnit.SECONDS);
}
return value;
} catch (Exception e) {
e.printStackTrace();
return loader.load();
}
}

/**
* 获取L1缓存
*/
private Object getFromL1Cache(String key) {
// 这里可以实现本地缓存逻辑
// 例如使用Caffeine或Guava Cache
return null;
}

/**
* 设置L1缓存
*/
private void setToL1Cache(String key, Object value) {
// 这里可以实现本地缓存逻辑
// 例如使用Caffeine或Guava Cache
}

/**
* 分布式锁
*/
private boolean lock(String key, String value, long time, TimeUnit timeUnit) {
try {
Boolean result = redisTemplate.opsForValue().setIfAbsent(key, value, time, timeUnit);
return Boolean.TRUE.equals(result);
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 释放分布式锁
*/
private boolean unlock(String key, String value) {
try {
String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
// 使用RedisScript执行, 保证value经过与setIfAbsent时相同的序列化方式
org.springframework.data.redis.core.script.DefaultRedisScript<Long> redisScript =
new org.springframework.data.redis.core.script.DefaultRedisScript<>(script, Long.class);
Long result = redisTemplate.execute(redisScript, Collections.singletonList(key), value);
return result != null && result > 0;
} catch (Exception e) {
e.printStackTrace();
return false;
}
}

/**
* 布隆过滤器检查
*/
private boolean bloomFilterContains(String key) {
// 这里可以实现布隆过滤器逻辑
// 例如使用Redis的布隆过滤器模块
return true;
}

/**
* 缓存加载器接口
*/
@FunctionalInterface
public interface CacheLoader {
Object load();
}

/**
* 缓存写入器接口
*/
@FunctionalInterface
public interface CacheWriter {
boolean write(Object value);
}
}
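cacheAvalancheProtection中"基础TTL + 最多10%随机抖动"的计算可以抽成一个独立的纯函数,便于单测。下面是一个假设性实现(JitterTtl为本文虚构命名):

```java
import java.util.concurrent.ThreadLocalRandom;

// 缓存雪崩防护: 给过期时间叠加随机抖动, 避免大批key在同一时刻失效
public class JitterTtl {
    public static long withJitter(long baseExpireSeconds) {
        // 抖动范围为[0, base*0.1), 与正文中 Math.random()*base*0.1 的写法等价
        long jitter = (long) (ThreadLocalRandom.current().nextDouble() * baseExpireSeconds * 0.1);
        return baseExpireSeconds + jitter;
    }
}
```

例如基础TTL为600秒时,实际写入的TTL落在600到659秒之间,同一批预热的key就会错峰过期。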

4. Redis高并发监控和调优

4.1 Redis监控脚本

#!/bin/bash
# Redis高并发监控脚本
# /opt/scripts/redis_monitor.sh

# 配置变量
REDIS_HOST="localhost"
REDIS_PORT="6379"
REDIS_PASSWORD="your_redis_password"
LOG_FILE="/var/log/redis_monitor.log"
ALERT_EMAIL="admin@example.com"

# 阈值配置
CPU_THRESHOLD=80
MEMORY_THRESHOLD=80
CONNECTION_THRESHOLD=8000
QPS_THRESHOLD=50000
LATENCY_THRESHOLD=1000

# 日志函数
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

# 获取Redis信息
get_redis_info() {
redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD info
}

# 获取Redis统计信息
get_redis_stats() {
local info=$(get_redis_info)

# 连接数统计
local connected_clients=$(echo "$info" | grep "connected_clients:" | cut -d: -f2 | tr -d '\r')
local total_connections_received=$(echo "$info" | grep "total_connections_received:" | cut -d: -f2 | tr -d '\r')
local rejected_connections=$(echo "$info" | grep "rejected_connections:" | cut -d: -f2 | tr -d '\r')

# 命令统计
local total_commands_processed=$(echo "$info" | grep "total_commands_processed:" | cut -d: -f2 | tr -d '\r')
local instantaneous_ops_per_sec=$(echo "$info" | grep "instantaneous_ops_per_sec:" | cut -d: -f2 | tr -d '\r')

# 内存统计
local used_memory=$(echo "$info" | grep "used_memory:" | cut -d: -f2 | tr -d '\r')
local used_memory_human=$(echo "$info" | grep "used_memory_human:" | cut -d: -f2 | tr -d '\r')
local used_memory_peak=$(echo "$info" | grep "used_memory_peak:" | cut -d: -f2 | tr -d '\r')
local used_memory_peak_human=$(echo "$info" | grep "used_memory_peak_human:" | cut -d: -f2 | tr -d '\r')

# 持久化统计
local rdb_last_save_time=$(echo "$info" | grep "rdb_last_save_time:" | cut -d: -f2 | tr -d '\r')
local rdb_changes_since_last_save=$(echo "$info" | grep "rdb_changes_since_last_save:" | cut -d: -f2 | tr -d '\r')
local aof_enabled=$(echo "$info" | grep "aof_enabled:" | cut -d: -f2 | tr -d '\r')

# 复制统计
local role=$(echo "$info" | grep "role:" | cut -d: -f2 | tr -d '\r')
local master_repl_offset=$(echo "$info" | grep "master_repl_offset:" | cut -d: -f2 | tr -d '\r')
local slave_repl_offset=$(echo "$info" | grep "slave_repl_offset:" | cut -d: -f2 | tr -d '\r')

echo "$connected_clients $total_connections_received $rejected_connections $total_commands_processed $instantaneous_ops_per_sec $used_memory $used_memory_human $used_memory_peak $used_memory_peak_human $rdb_last_save_time $rdb_changes_since_last_save $aof_enabled $role $master_repl_offset $slave_repl_offset"
}

# 获取系统资源信息
get_system_info() {
local cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | awk -F'%' '{print $1}')
local memory_usage=$(free | grep Mem | awk '{printf "%.2f", $3/$2 * 100.0}')
local load_avg=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | sed 's/,//')

echo "$cpu_usage $memory_usage $load_avg"
}

# 检查Redis健康状态
check_redis_health() {
local stats=$(get_redis_stats)
local connected_clients=$(echo $stats | awk '{print $1}')
local rejected_connections=$(echo $stats | awk '{print $3}')
local instantaneous_ops_per_sec=$(echo $stats | awk '{print $5}')
local used_memory=$(echo $stats | awk '{print $6}')

local issues=()

# 检查连接数
if [ $connected_clients -gt $CONNECTION_THRESHOLD ]; then
issues+=("Redis连接数过高: $connected_clients > $CONNECTION_THRESHOLD")
fi

# 检查拒绝连接数
if [ $rejected_connections -gt 0 ]; then
issues+=("Redis拒绝连接数: $rejected_connections")
fi

# 检查QPS
if [ $instantaneous_ops_per_sec -gt $QPS_THRESHOLD ]; then
issues+=("Redis QPS过高: $instantaneous_ops_per_sec > $QPS_THRESHOLD")
fi

# 检查内存使用
local memory_mb=$((used_memory / 1024 / 1024))
if [ $memory_mb -gt 6000 ]; then # 假设8GB内存,75%阈值
issues+=("Redis内存使用过高: ${memory_mb}MB")
fi

if [ ${#issues[@]} -gt 0 ]; then
echo "WARNING: ${issues[*]}"
return 1
fi

echo "HEALTHY: Redis运行正常"
return 0
}

# 检查系统资源
check_system_resources() {
local system_info=$(get_system_info)
local cpu_usage=$(echo $system_info | awk '{print $1}')
local memory_usage=$(echo $system_info | awk '{print $2}')
local load_avg=$(echo $system_info | awk '{print $3}')

local issues=()

# 检查CPU使用率
if (( $(echo "$cpu_usage > $CPU_THRESHOLD" | bc -l) )); then
issues+=("系统CPU使用率过高: ${cpu_usage}%")
fi

# 检查内存使用率
if (( $(echo "$memory_usage > $MEMORY_THRESHOLD" | bc -l) )); then
issues+=("系统内存使用率过高: ${memory_usage}%")
fi

# 检查系统负载
if (( $(echo "$load_avg > 5.0" | bc -l) )); then
issues+=("系统负载过高: $load_avg")
fi

if [ ${#issues[@]} -gt 0 ]; then
echo "WARNING: ${issues[*]}"
return 1
fi

echo "HEALTHY: 系统资源使用正常"
return 0
}

# 发送告警邮件
send_alert() {
local message="$1"
echo "$message" | mail -s "Redis Alert" $ALERT_EMAIL
log_message "ALERT: $message"
}

# 生成监控报告
generate_report() {
local stats=$(get_redis_stats)
local system_info=$(get_system_info)

local connected_clients=$(echo $stats | awk '{print $1}')
local total_connections_received=$(echo $stats | awk '{print $2}')
local rejected_connections=$(echo $stats | awk '{print $3}')
local total_commands_processed=$(echo $stats | awk '{print $4}')
local instantaneous_ops_per_sec=$(echo $stats | awk '{print $5}')
local used_memory_human=$(echo $stats | awk '{print $7}')
local used_memory_peak_human=$(echo $stats | awk '{print $9}')
local role=$(echo $stats | awk '{print $13}')

local cpu_usage=$(echo $system_info | awk '{print $1}')
local memory_usage=$(echo $system_info | awk '{print $2}')
local load_avg=$(echo $system_info | awk '{print $3}')

cat << EOF

=== Redis高并发监控报告 ===
生成时间: $(date)

=== Redis状态 ===
角色: $role
连接数: $connected_clients
总连接数: $total_connections_received
拒绝连接数: $rejected_connections
总命令数: $total_commands_processed
当前QPS: $instantaneous_ops_per_sec
内存使用: $used_memory_human
内存峰值: $used_memory_peak_human

=== 系统状态 ===
CPU使用率: ${cpu_usage}%
内存使用率: ${memory_usage}%
系统负载: $load_avg

=== 健康检查 ===
Redis健康状态: $(check_redis_health)
系统资源状态: $(check_system_resources)

EOF
}

# 主监控函数
monitor_redis() {
log_message "Starting Redis high concurrency monitoring"

# 检查Redis健康状态
local redis_health=$(check_redis_health)
if [ $? -ne 0 ]; then
send_alert "$redis_health"
fi

# 检查系统资源
local system_health
system_health=$(check_system_resources)  # 同上,避免local重置退出码
if [ $? -ne 0 ]; then
send_alert "$system_health"
fi

# 生成监控报告
generate_report >> $LOG_FILE

log_message "Redis high concurrency monitoring completed"
}

# 主函数
main() {
case "$1" in
"monitor")
monitor_redis
;;
"report")
generate_report
;;
"health")
check_redis_health
;;
"resources")
check_system_resources
;;
"stats")
get_redis_stats
;;
*)
echo "Usage: $0 {monitor|report|health|resources|stats}"
exit 1
;;
esac
}

# 执行主函数
main "$@"
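上面的监控脚本可以交给cron定时调度。下面是一个示例crontab片段(脚本路径为假设值,请替换为实际保存位置):

```
# 每5分钟执行一次健康检查与告警
*/5 * * * * /opt/scripts/redis_monitor.sh monitor
# 每天早上8点追加一份完整监控报告
0 8 * * * /opt/scripts/redis_monitor.sh report >> /var/log/redis_monitor_report.log 2>&1
```

注意cron环境的PATH较精简,若redis-cli、bc等工具不在默认路径,需在脚本内使用绝对路径。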

4.2 Redis性能测试脚本

#!/bin/bash
# Redis性能测试脚本
# /opt/scripts/redis_performance_test.sh

# 配置变量
REDIS_HOST="localhost"
REDIS_PORT="6379"
REDIS_PASSWORD="your_redis_password"
LOG_FILE="/var/log/redis_performance_test.log"
TEST_DURATION=60
CONCURRENT_CLIENTS=1000
REQUEST_COUNT=100000

# 日志函数
log_message() {
echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a $LOG_FILE
}

# 检查依赖
check_dependencies() {
local missing_deps=()

# redis-cli与redis-benchmark同属redis-tools包,合并检查避免重复记录
if ! command -v redis-cli &> /dev/null || ! command -v redis-benchmark &> /dev/null; then
missing_deps+=("redis-tools")
fi

if [ ${#missing_deps[@]} -gt 0 ]; then
log_message "Missing dependencies: ${missing_deps[*]}"
log_message "Please install: apt-get install ${missing_deps[*]}"
return 1
fi

return 0
}

# 基础性能测试
run_basic_test() {
log_message "Running basic Redis performance test"

# SET操作测试
log_message "Testing SET operations..."
redis-benchmark -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD -t set -n $REQUEST_COUNT -c $CONCURRENT_CLIENTS > /tmp/redis_set_test.txt 2>&1

# GET操作测试
log_message "Testing GET operations..."
redis-benchmark -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD -t get -n $REQUEST_COUNT -c $CONCURRENT_CLIENTS > /tmp/redis_get_test.txt 2>&1

# 混合操作测试
log_message "Testing mixed operations..."
redis-benchmark -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD -t set,get -n $REQUEST_COUNT -c $CONCURRENT_CLIENTS > /tmp/redis_mixed_test.txt 2>&1

# 分析结果
analyze_test_results
}

# 分析测试结果
analyze_test_results() {
log_message "Analyzing test results..."

# SET操作结果(RPS行取倒数第4个字段,兼容新旧redis-benchmark输出格式;
# "latency summary"延迟汇总表仅Redis 6+的redis-benchmark输出)
if [ -f /tmp/redis_set_test.txt ]; then
local set_rps=$(grep "requests per second" /tmp/redis_set_test.txt | tail -1 | awk '{print $(NF-3)}')
local set_latency=$(grep -A 2 "latency summary" /tmp/redis_set_test.txt | tail -1 | awk '{print $1}')
log_message "SET Test Results - RPS: $set_rps, Avg Latency: ${set_latency}ms"
fi

# GET操作结果
if [ -f /tmp/redis_get_test.txt ]; then
local get_rps=$(grep "requests per second" /tmp/redis_get_test.txt | tail -1 | awk '{print $(NF-3)}')
local get_latency=$(grep -A 2 "latency summary" /tmp/redis_get_test.txt | tail -1 | awk '{print $1}')
log_message "GET Test Results - RPS: $get_rps, Avg Latency: ${get_latency}ms"
fi

# 混合操作结果(set,get会输出多段结果,这里取最后一段)
if [ -f /tmp/redis_mixed_test.txt ]; then
local mixed_rps=$(grep "requests per second" /tmp/redis_mixed_test.txt | tail -1 | awk '{print $(NF-3)}')
local mixed_latency=$(grep -A 2 "latency summary" /tmp/redis_mixed_test.txt | tail -1 | awk '{print $1}')
log_message "Mixed Test Results - RPS: $mixed_rps, Avg Latency: ${mixed_latency}ms"
fi
}

# 压力测试
run_stress_test() {
log_message "Running Redis stress test"

# 高并发测试
log_message "Testing high concurrency..."
redis-benchmark -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD -t set,get -n $((REQUEST_COUNT * 2)) -c $((CONCURRENT_CLIENTS * 2)) -d 1024 > /tmp/redis_stress_test.txt 2>&1

# 大数据量测试
log_message "Testing large data..."
redis-benchmark -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD -t set,get -n $REQUEST_COUNT -c $CONCURRENT_CLIENTS -d 10240 > /tmp/redis_large_data_test.txt 2>&1

# 分析压力测试结果
analyze_stress_test_results
}

# 分析压力测试结果
analyze_stress_test_results() {
log_message "Analyzing stress test results..."

# 高并发测试结果(解析方式与analyze_test_results一致,兼容新旧输出格式)
if [ -f /tmp/redis_stress_test.txt ]; then
local stress_rps=$(grep "requests per second" /tmp/redis_stress_test.txt | tail -1 | awk '{print $(NF-3)}')
local stress_latency=$(grep -A 2 "latency summary" /tmp/redis_stress_test.txt | tail -1 | awk '{print $1}')
log_message "Stress Test Results - RPS: $stress_rps, Avg Latency: ${stress_latency}ms"
fi

# 大数据量测试结果
if [ -f /tmp/redis_large_data_test.txt ]; then
local large_data_rps=$(grep "requests per second" /tmp/redis_large_data_test.txt | tail -1 | awk '{print $(NF-3)}')
local large_data_latency=$(grep -A 2 "latency summary" /tmp/redis_large_data_test.txt | tail -1 | awk '{print $1}')
log_message "Large Data Test Results - RPS: $large_data_rps, Avg Latency: ${large_data_latency}ms"
fi
}

# 延迟测试
run_latency_test() {
log_message "Running Redis latency test"

# 延迟测试(--latency模式会持续运行,用timeout限定测试时长)
timeout $TEST_DURATION redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD --latency > /tmp/redis_latency_test.txt 2>&1

# 分析延迟结果(输出形如 "min: 0, max: 1, avg: 0.11 (428 samples)",采样以\r分隔)
if [ -f /tmp/redis_latency_test.txt ]; then
local last_sample=$(tr '\r' '\n' < /tmp/redis_latency_test.txt | grep "avg:" | tail -1)
local avg_latency=$(echo "$last_sample" | awk -F'avg: ' '{print $2}' | awk '{print $1}')
local max_latency=$(echo "$last_sample" | awk -F'max: ' '{print $2}' | awk -F',' '{print $1}')
log_message "Latency Test Results - Avg: ${avg_latency}ms, Max: ${max_latency}ms"
fi
}

# 内存测试
run_memory_test() {
log_message "Running Redis memory test"

# 获取测试前内存使用
local before_memory=$(redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD info memory | grep "used_memory:" | cut -d: -f2 | tr -d '\r')

# 写入大量数据(通过--pipe批量提交,避免1万次redis-cli各自建立连接)
log_message "Writing large amount of data..."
for i in $(seq 1 10000); do
echo "set test_key_$i test_value_$i"
done | redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD --pipe > /dev/null 2>&1

# 获取测试后内存使用
local after_memory=$(redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD info memory | grep "used_memory:" | cut -d: -f2 | tr -d '\r')

# 计算内存增长(1万个小键值通常远小于1MB,用KB粒度展示更准确)
local memory_increase=$((after_memory - before_memory))
local memory_increase_kb=$((memory_increase / 1024))

log_message "Memory Test Results - Memory Increase: ${memory_increase_kb}KB"

# 清理测试数据(同样通过--pipe批量删除)
log_message "Cleaning up test data..."
for i in $(seq 1 10000); do
echo "del test_key_$i"
done | redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD --pipe > /dev/null 2>&1
}

# 主测试函数
run_performance_test() {
log_message "Starting Redis performance test"

# 检查依赖
if ! check_dependencies; then
return 1
fi

# 检查Redis连接
if ! redis-cli -h $REDIS_HOST -p $REDIS_PORT -a $REDIS_PASSWORD ping > /dev/null 2>&1; then
log_message "ERROR: Cannot connect to Redis"
return 1
fi

# 运行各种测试
run_basic_test
run_stress_test
run_latency_test
run_memory_test

log_message "Redis performance test completed"

return 0
}

# 主函数
main() {
case "$1" in
"test")
run_performance_test
;;
"basic")
run_basic_test
;;
"stress")
run_stress_test
;;
"latency")
run_latency_test
;;
"memory")
run_memory_test
;;
*)
echo "Usage: $0 {test|basic|stress|latency|memory}"
echo " test - Run full performance test"
echo " basic - Run basic performance test"
echo " stress - Run stress test"
echo " latency - Run latency test"
echo " memory - Run memory test"
exit 1
;;
esac
}

# 执行主函数
main "$@"
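脚本对redis-benchmark结果的解析依赖其文本输出格式,而该格式随版本变化:旧版直接输出一行"NNN.NN requests per second",Redis 6+则前置"throughput summary:"。下面的离线示例(数据为假设样例)演示取倒数第4个字段可同时兼容两种格式:

```shell
# 假设的redis-benchmark(6.x)吞吐汇总行样例
summary_line="throughput summary: 98522.17 requests per second"
# 倒数第4个字段在新旧两种格式下均为RPS数值
rps=$(echo "$summary_line" | awk '{print $(NF-3)}')
echo "RPS: $rps"
```

若未来版本再次调整格式,建议改用`redis-benchmark --csv`获得稳定的机器可读输出。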

5. 总结

5.1 Redis高并发流量支撑总结

  1. 架构优势: Redis的单线程事件驱动架构适合高并发场景
  2. 内存存储: 内存存储提供极快的访问速度
  3. 连接池: 合理的连接池配置提升并发处理能力
  4. 缓存策略: 多种缓存策略适应不同业务场景
  5. 集群部署: 集群模式提供水平扩展能力
  6. 监控调优: 持续监控和调优确保系统稳定
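缓存策略是否有效,最直接的量化指标是命中率,即keyspace_hits / (keyspace_hits + keyspace_misses)。下面用一组假设的INFO stats数值演示计算方法:

```shell
# 假设从 INFO stats 中取得的样例数值
hits=9500      # keyspace_hits
misses=500     # keyspace_misses
# 命中率 = hits / (hits + misses) * 100
hit_rate=$(awk -v h="$hits" -v m="$misses" 'BEGIN{printf "%.2f", h*100/(h+m)}')
echo "缓存命中率: ${hit_rate}%"
```

命中率持续偏低时,应结合业务重新审视键的TTL设置与淘汰策略(maxmemory-policy)。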

5.2 Redis高并发优化要点

  • 连接池配置: 合理设置连接池参数
  • 内存管理: 优化内存使用和淘汰策略
  • 持久化策略: 选择合适的持久化方式
  • 网络优化: 优化网络配置和超时设置
  • 数据结构: 选择合适的数据结构
  • 集群部署: 使用集群模式提升性能

5.3 最佳实践建议

  • 监控系统: 实时监控Redis性能指标
  • 压力测试: 定期进行压力测试
  • 配置优化: 根据实际负载优化配置
  • 集群部署: 使用集群模式提升可用性
  • 故障处理: 建立完善的故障处理机制
  • 容量规划: 合理规划系统容量
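容量规划可以从一个粗略的估算入手:键数量 × 单键平均占用(含Redis元数据开销)。下面是一个示例计算,数值均为假设:

```shell
keys=10000000        # 预估键数量(假设值)
bytes_per_key=100    # 单键平均占用字节数,含开销(假设值)
# 估算总内存占用(GB)
total_gb=$(awk -v k="$keys" -v b="$bytes_per_key" 'BEGIN{printf "%.2f", k*b/1024/1024/1024}')
echo "预估内存占用: ${total_gb} GB"
```

实际规划时还应在此基础上叠加复制缓冲区、AOF重写等额外开销,并预留可观的余量,避免触发频繁淘汰。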

通过本文的Redis高并发流量支撑指南,您可以掌握Redis的高并发处理原理、优化策略、部署方案以及在企业级应用中的最佳实践,构建高效、稳定、可扩展的Redis系统!