
1. Master/Slave Replication (replica) (not recommended)

Introduction

Basic operations

Basic commands

info replication   — view the node's role, its master/replica relationships and the replication configuration

replicaof/slaveof <master-ip> <master-port>   — replicaof and slaveof are equivalent. This is normally written into redis.conf, but it can also be issued at runtime to change which master a replica follows; if the instance is already a replica of some master, it stops syncing with the old master and starts syncing with the new one

replicaof/slaveof no one      — stop replicating from the current master and promote this instance to a master
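These commands can be tried from redis-cli. A minimal sketch, assuming the instances configured below (master on 6001, replicas on 6002/6003, password xgm@2023) and the host IP used throughout this article:

redis-cli -p 6002 -a xgm@2023 info replication                 # role, master_host, offsets
redis-cli -p 6002 -a xgm@2023 replicaof 172.16.64.21 6001      # follow 6001 at runtime
redis-cli -p 6002 -a xgm@2023 replicaof no one                 # detach and become a master again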

Configure one master and two slaves


Master: 6001.conf
#1 Redis does not run as a daemon by default; set this to yes to run it as one
daemonize yes
#2 The bind line restricts connections to localhost; comment it out to allow remote connections
#bind 127.0.0.1
#3 Protected mode is on by default: without a password or a bind setting, only local connections are accepted, so disable it here
protected-mode no
port 6001
dir /redis-learn/redis-7.0.9/conf/replica/6001
pidfile redis_6001.pid
logfile  "6001.log"
requirepass xgm@2023
dbfilename dump6001.rdb
#Enable AOF incremental persistence
appendonly yes
appendfilename "appendonly6001.aof"


Slave 1: 6002.conf (points to the master)
#1 Redis does not run as a daemon by default; set this to yes to run it as one
daemonize yes
#2 The bind line restricts connections to localhost; comment it out to allow remote connections
#bind 127.0.0.1
#3 Protected mode is on by default: without a password or a bind setting, only local connections are accepted, so disable it here
protected-mode no
port 6002
dir /redis-learn/redis-7.0.9/conf/replica/6002
pidfile redis_6002.pid
logfile  "6002.log"
requirepass xgm@2023
dbfilename dump6002.rdb
#Enable AOF incremental persistence
appendonly yes
appendfilename "appendonly6002.aof"
#Replication: master IP and port
replicaof 172.16.64.21  6001
#Master password
masterauth xgm@2023

Slave 2: 6003.conf (points to the master)
#1 Redis does not run as a daemon by default; set this to yes to run it as one
daemonize yes
#2 The bind line restricts connections to localhost; comment it out to allow remote connections
#bind 127.0.0.1
#3 Protected mode is on by default: without a password or a bind setting, only local connections are accepted, so disable it here
protected-mode no
port 6003
dir /redis-learn/redis-7.0.9/conf/replica/6003
pidfile redis_6003.pid
logfile  "6003.log"
requirepass xgm@2023
dbfilename dump6003.rdb
#Enable AOF incremental persistence
appendonly yes
appendfilename "appendonly6003.aof"
#Replication: master IP and port
replicaof 172.16.64.21  6001
#Master password
masterauth xgm@2023


Note: open the relevant ports in the firewall.
Start the three instances.
Check the master/slave relationship (see the sketch below).
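A sketch of the start-up and verification commands; the config file names and paths are assumptions based on the directories used above, and the firewall lines assume firewalld:

firewall-cmd --zone=public --add-port=6001-6003/tcp --permanent && firewall-cmd --reload

redis-server /redis-learn/redis-7.0.9/conf/replica/6001/6001.conf
redis-server /redis-learn/redis-7.0.9/conf/replica/6002/6002.conf
redis-server /redis-learn/redis-7.0.9/conf/replica/6003/6003.conf

# Master: expect role:master and connected_slaves:2
redis-cli -p 6001 -a xgm@2023 info replication
# Slave: expect role:slave with master_host/master_port pointing at 6001
redis-cli -p 6002 -a xgm@2023 info replication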


How replication works

Slave startup: initial sync request

  • After a slave starts and successfully connects to the master, it sends a sync command
  • On a slave's first, fresh connection to the master, a full synchronization (full copy) is performed automatically, and any data the slave previously held is overwritten by the master's data

First connection: full synchronization

  • When the master receives the sync command, it starts saving a snapshot in the background (RDB persistence; replication triggers an RDB save) while buffering every write command it receives in the meantime. Once the RDB save finishes, the master sends the RDB snapshot and the buffered commands to all slaves to complete one full synchronization
  • The slave, after receiving the snapshot, saves it to disk and loads it into memory, which completes the initial copy

Heartbeats keep the link alive

  • repl-ping-replica-period 10
  • The interval at which the master sends PING packets; the default is 10 seconds

Steady state: incremental replication

  • The master keeps forwarding every new write command it collects to the slaves, keeping them in sync

Replica goes offline: resume on reconnect

  • The master checks the offset in its replication backlog; both master and slave keep a replication offset as well as a master id (see the check below)
  • The offset is tracked against the backlog: the master only sends the slave the data written after the offset it has already replicated, similar to resuming an interrupted download
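The fields involved can be inspected with info replication on both sides; a quick sketch:

redis-cli -p 6001 -a xgm@2023 info replication | grep -E 'master_replid|master_repl_offset|repl_backlog'
redis-cli -p 6002 -a xgm@2023 info replication | grep -E 'master_replid|slave_repl_offset'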

Drawbacks

Replication lag

Because every write goes to the master first and is then propagated to the slaves, there is always some delay before a slave catches up. The delay gets worse when the system is busy, and adding more slaves makes it worse still.

When the master goes down

  1. By default, no slave is automatically promoted to master
  2. Manual intervention is required (see the sketch below)
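Since plain replication has no automatic failover, a hedged sketch of the manual steps, assuming 6002 is chosen as the new master:

# Promote 6002
redis-cli -p 6002 -a xgm@2023 replicaof no one
# Re-point the remaining replica at the new master
redis-cli -p 6003 -a xgm@2023 replicaof 172.16.64.21 6002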


2. Sentinel (not recommended)

Introduction

Role

The four functions of Sentinel: monitoring the master and its replicas, notification when an instance misbehaves, automatic failover when the master goes down, and acting as a configuration provider so clients can ask the sentinels for the current master address.

Setup


Sentinel 6004: sentinel.conf

bind 0.0.0.0
logfile "/redis-learn/redis-7.0.9/conf/sentinel/6004/6004.log"
pidfile /redis-learn/redis-7.0.9/conf/sentinel/6004/redis-sentinel-6004.pid
dir /redis-learn/redis-7.0.9/conf/sentinel/6004/
protected-mode no
daemonize no
port 6004
#Redis master to monitor; the trailing 2 is the quorum: the minimum number of sentinels that must agree the master is objectively down before a failover is authorized
sentinel monitor mymaster 172.16.64.21 6001 2
#Password the sentinel uses to authenticate with the master
sentinel auth-pass mymaster xgm@2023

Sentinel 6005: sentinel.conf

bind 0.0.0.0
logfile "/redis-learn/redis-7.0.9/conf/sentinel/6005/6005.log"
pidfile /redis-learn/redis-7.0.9/conf/sentinel/6005/redis-sentinel-6005.pid
dir /redis-learn/redis-7.0.9/conf/sentinel/6005/
protected-mode no
daemonize no
port 6005
#Redis master to monitor; the trailing 2 is the quorum: the minimum number of sentinels that must agree the master is objectively down before a failover is authorized
sentinel monitor mymaster 172.16.64.21 6001 2
#Password the sentinel uses to authenticate with the master
sentinel auth-pass mymaster xgm@2023

Sentinel 6006: sentinel.conf

bind 0.0.0.0
logfile "/redis-learn/redis-7.0.9/conf/sentinel/6006/6006.log"
pidfile /redis-learn/redis-7.0.9/conf/sentinel/6006/redis-sentinel-6006.pid
dir /redis-learn/redis-7.0.9/conf/sentinel/6006/
protected-mode no
daemonize no
port 6006
#Redis master to monitor; the trailing 2 is the quorum: the minimum number of sentinels that must agree the master is objectively down before a failover is authorized
sentinel monitor mymaster 172.16.64.21 6001 2
#Password the sentinel uses to authenticate with the master
sentinel auth-pass mymaster xgm@2023


Replication topology: one master, two slaves

Modify 6001.conf (add the master password so that, if 6001 is later demoted by a failover, it can authenticate with the new master):

masterauth xgm@2023

Start the replication cluster

Start the three sentinels (sketch below)
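A sketch of starting the sentinels; the file names are assumptions based on the directories above, and since daemonize is no in these files each process stays in the foreground unless wrapped with nohup:

redis-sentinel /redis-learn/redis-7.0.9/conf/sentinel/6004/sentinel.conf
redis-sentinel /redis-learn/redis-7.0.9/conf/sentinel/6005/sentinel.conf
redis-sentinel /redis-learn/redis-7.0.9/conf/sentinel/6006/sentinel.conf
# equivalent form: redis-server <path>/sentinel.conf --sentinel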

Failover demonstration

Check the topology; at this point 6001 is the master.
Shut down master 6001.
Result: the sentinels detect the failure, promote one of the remaining replicas to master, and reconfigure the other replica to follow it.

Restart 6001.
Result: 6001 rejoins the topology, but as a replica of the new master rather than as the master.

How sentinel election and failover work


SDOWN: subjectively down — a single sentinel has not received a valid reply from the instance within down-after-milliseconds, so that sentinel alone considers it down.

ODOWN: objectively down — at least quorum sentinels agree the master is down, which is the precondition for a failover.

Electing the leader sentinel — the sentinels vote among themselves and one of them becomes the leader that performs the failover.

  • The leader sentinel then drives the failover and picks a new master

    • The new master takes over
      • one of the slaves is selected and promoted to master
    • The other replicas fall in line
      • they are reconfigured to replicate from the new master
    • The old master submits
      • when the old master comes back, it is demoted to a replica of the new master
  • The whole failover is carried out by the sentinels themselves, with no manual intervention

Usage recommendations
  • Run several sentinel nodes; the sentinels themselves should form a cluster for high availability
  • Use an odd number of sentinel nodes
  • Keep the configuration of all sentinel nodes consistent
  • If sentinels run inside Docker or other containers, make sure the ports are mapped correctly
  • Sentinel plus replication still does not guarantee zero data loss

Using it from Spring Boot

Pitfalls:

Read/write splitting configuration

pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.17</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.redis</groupId>
    <artifactId>redis-sentinel</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>redis-sentinel</name>
    <description>redis-sentinel</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>com.mysql</groupId>
            <artifactId>mysql-connector-j</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Configuration file (application.properties)
#Data source
spring.datasource.druid.username=root
spring.datasource.druid.password=xgm@2023..
spring.datasource.druid.url=jdbc:mysql://172.16.204.51:3306/redis?serverTimezone=GMT%2B8
spring.datasource.druid.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.druid.initial-size=5

#Redis sentinel mode
spring.redis.sentinel.master=mymaster
#Sentinel nodes
spring.redis.sentinel.nodes=172.16.64.21:6004,172.16.64.21:6005,172.16.64.21:6006
spring.redis.database=0
spring.redis.password=xgm@2023
spring.redis.timeout=3000ms
#Lettuce is the default client and is thread-safe. Jedis is synchronous, does not support async calls, and its client instances are not thread-safe (one instance per thread), so Jedis is normally used through a connection pool.
# Maximum number of connections in the pool (a negative value means no limit)
spring.redis.lettuce.pool.max-active=100
# Maximum number of idle connections in the pool
spring.redis.lettuce.pool.max-idle=100
# Maximum time to block waiting for a connection (a negative value means no limit)
spring.redis.lettuce.pool.max-wait=1000ms
# Minimum number of idle connections in the pool
spring.redis.lettuce.pool.min-idle=1
spring.redis.lettuce.shutdown-timeout=1000ms


#Logging
logging.pattern.console='%date{yyyy-MM-dd HH:mm:ss.SSS} | %highlight(%5level) [%green(%16.16thread)] %clr(%-50.50logger{49}){cyan} %4line -| %highlight(%msg%n)'
logging.level.root=info
logging.level.io.lettuce.core=debug
logging.level.org.springframework.data.redis=debug
Read/write splitting configuration class
package com.redis.redissentinel.conf;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import io.lettuce.core.ReadFrom;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.HashSet;
import java.util.TimeZone;
import org.springframework.data.redis.core.RedisTemplate;
/**
 * @author ygr
 * @date 2022-02-15 16:30
 */
@Slf4j
@Configuration
public class RedisConfig {

    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setTimeZone(TimeZone.getTimeZone("GMT+8"));
        objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        objectMapper.setDateFormat(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));
        objectMapper.configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);
        return objectMapper;
    }

    @Bean
    @ConditionalOnMissingBean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        // Create the RedisTemplate<String, Object> instance
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);

        // Jackson serializer used for values
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper());

        StringRedisSerializer stringSerial = new StringRedisSerializer();
        // Keys are serialized as plain strings
        template.setKeySerializer(stringSerial);
        // Values are serialized with Jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // Hash keys are serialized as plain strings
        template.setHashKeySerializer(stringSerial);
        // Hash values are serialized with Jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        template.afterPropertiesSet();
        return template;
    }
    @Bean
    public RedisConnectionFactory lettuceConnectionFactory(RedisProperties redisProperties) {
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master(redisProperties.getSentinel().getMaster());
        redisProperties.getSentinel().getNodes().forEach(s -> {
            String[] arr = s.split(":");
            sentinelConfig.sentinel(arr[0],Integer.parseInt(arr[1]));
        });
        LettucePoolingClientConfiguration lettuceClientConfiguration = LettucePoolingClientConfiguration.builder()
                // Read/write splitting. If the master can handle the full read/write load, this is unnecessary and everything can go to the master.
                // ANY: read from any node; NEAREST: read from the lowest-latency node; MASTER_PREFERRED / UPSTREAM_PREFERRED: prefer the master, fall back to replicas; MASTER / UPSTREAM: read only from the master; ANY_REPLICA: read from any replica
                .readFrom(ReadFrom.ANY_REPLICA)
                .build();
        sentinelConfig.setPassword(RedisPassword.of(redisProperties.getPassword()));
        sentinelConfig.setDatabase(redisProperties.getDatabase());
        return new LettuceConnectionFactory(sentinelConfig, lettuceClientConfiguration);
    }

}
Test
package com.redis.redissentinel;

import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;

import javax.annotation.Resource;
import java.util.concurrent.TimeUnit;

@Slf4j
@SpringBootTest
class RedisSentinelApplicationTests {

    @Resource
    private RedisTemplate<String, Object> redisTemplate;


    @Test
    void writeTest() {
        for (int i = 0; i < 3; i++) {
            try {
                redisTemplate.opsForValue().set("k" + i, "v" + i);
                log.info("set value success: {}", i);

                Object val = redisTemplate.opsForValue().get("k" + i);
                log.info("get value success: {}", val);
                TimeUnit.SECONDS.sleep(1);
            } catch (Exception e) {
                log.error("error: {}", e.getMessage());
            }
        }
        log.info("finished...");
    }

    @Test
    void readTest() {

        Object k1 = redisTemplate.opsForValue().get("k1");
        log.info("读取节点k1的值:{}",k1);

    }

}


Pitfall guide

3. Cluster (important / recommended)

Official description

Architecture diagram

Architecture design principles

Redis cluster slot map

Advantages of Redis Cluster

Slot mapping strategies

Modulo hashing (hash(key) % N) (not recommended)
Use cases
Pros
Cons
Consistent hashing partitioning (not recommended)

Use cases
Pros
Cons
Hash slot partitioning (16384 slots, recommended; see the keyslot sketch after this list)
Use cases
Pros
Cons
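With hash slot partitioning the slot of a key is CRC16(key) mod 16384, and each master owns a range of slots. Once the cluster built later in this section is running, the mapping can be checked directly (a sketch using the 6007 node and the password from the configs below):

redis-cli -p 6007 -a 123456 cluster keyslot k1   # prints the slot number (0-16383) that k1 hashes to
redis-cli -c -p 6007 -a 123456 cluster slots     # shows which node serves each slot range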


Cluster setup (three masters, three replicas)

Configuration files

bind 0.0.0.0
daemonize yes
protected-mode no
port 6007
logfile "cluster6007.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6007.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6007
dbfilename dump6007.rdb
appendonly yes
appendfilename "appendonly6007.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6007.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6008
logfile "cluster6008.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6008.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6008
dbfilename dump6008.rdb
appendonly yes
appendfilename "appendonly6008.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6008.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6009
logfile "cluster6009.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6009.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6009
dbfilename dump6009.rdb
appendonly yes
appendfilename "appendonly6009.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6009.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6012
logfile "cluster6012.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6012.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6012
dbfilename dump6012.rdb
appendonly yes
appendfilename "appendonly6012.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6012.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6014
logfile "cluster6014.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6014.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6014
dbfilename dump6014.rdb
appendonly yes
appendfilename "appendonly6014.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6014.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6015
logfile "cluster6012.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6015.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6015
dbfilename dump6015.rdb
appendonly yes
appendfilename "appendonly6015.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6015.conf
cluster-node-timeout 5000

Start the six instances

Use redis-cli to build the cluster from the six instances (sketch below)
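A sketch of the commands; the config file names are assumptions based on the directories above, and --cluster-replicas 1 asks redis-cli to pair each master with one replica automatically:

redis-server /redis-learn/redis-7.0.9/conf/cluster/6007.conf    # repeat for 6008, 6009, 6012, 6014, 6015

redis-cli -a 123456 --cluster create \
  172.16.64.21:6007 172.16.64.21:6008 172.16.64.21:6009 \
  172.16.64.21:6012 172.16.64.21:6014 172.16.64.21:6015 \
  --cluster-replicas 1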

Connect to any node and check the cluster state

Check
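For example (the -c flag puts redis-cli into cluster mode so it follows MOVED redirects):

redis-cli -c -p 6007 -a 123456 cluster info    # expect cluster_state:ok and cluster_slots_assigned:16384
redis-cli -c -p 6007 -a 123456 cluster nodes   # lists every node's id, role, master link and slot ranges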


Test
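A quick write/read through a cluster-mode client; without -c, a key that hashes to a slot owned by another node fails with a MOVED error instead of being redirected:

redis-cli -c -p 6007 -a 123456
set k1 v1      # the client may print "Redirected to slot [...]" before OK
get k1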


Fault tolerance: master/replica failover (sketch below)
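One way to exercise the failover, as a sketch (which replica takes over depends on how the create step paired the nodes):

redis-cli -p 6007 -a 123456 shutdown nosave     # simulate a master failure
redis-cli -c -p 6008 -a 123456 cluster nodes    # the replica that was covering 6007's slots should now show as master
# restart 6007 afterwards; it rejoins as a replica of the node that took over
redis-server /redis-learn/redis-7.0.9/conf/cluster/6007.conf
# a coordinated, manual switch can also be requested on a replica with: cluster failover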

Scaling out: adding a master and a replica

bind 0.0.0.0
daemonize yes
protected-mode no
port 6016
logfile "cluster6016.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6016.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6016
dbfilename dump6016.rdb
appendonly yes
appendfilename "appendonly6016.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6016.conf
cluster-node-timeout 5000

bind 0.0.0.0
daemonize yes
protected-mode no
port 6017
logfile "cluster6017.log"
pidfile /redis-learn/redis-7.0.9/conf/cluster/cluster6017.pid
dir /redis-learn/redis-7.0.9/conf/cluster/cluster6017
dbfilename dump6017.rdb
appendonly yes
appendfilename "appendonly6017.aof"
requirepass 123456
masterauth 123456
 
cluster-enabled yes
cluster-config-file nodes-6017.conf
cluster-node-timeout 5000

Start them; at this point both new instances are standalone masters.
Add the new 6016 and 6017 nodes to the existing cluster (sketch below).
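A sketch of joining 6016 as an empty master; 172.16.64.21:6007 is simply an existing node used as the entry point:

redis-cli -a 123456 --cluster add-node 172.16.64.21:6016 172.16.64.21:6007
redis-cli -a 123456 --cluster check 172.16.64.21:6016    # verify it joined, currently with 0 slots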


Check the cluster state for 6016


Assign slots to 6016, taking an even share from the other masters (reshard, sketch below)
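The reshard is interactive; a sketch of the invocation and the answers it asks for (the node id has to be copied from the cluster nodes output):

redis-cli -a 123456 --cluster reshard 172.16.64.21:6007
# How many slots do you want to move?  -> 4096
# What is the receiving node ID?       -> <node id of 6016>
# Source node #1:                      -> all   (or specific node ids, finished with done)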


Explanation of the prompts above

  1. all: every existing master becomes a source node; redis-cli takes a share of hash slots from each source until it has collected 4096, then moves them to node 6016
  2. done: alternatively, enter the ids of the specific source nodes the 4096 slots should be taken from, then type done
Check the cluster state again


Attach 6017 as a replica of master 6016; --cluster-master-id is followed by 6016's node id (sketch below)
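A sketch (the placeholder must be replaced with the real node id of 6016 from cluster nodes):

redis-cli -a 123456 --cluster add-node 172.16.64.21:6017 172.16.64.21:6007 \
  --cluster-slave --cluster-master-id <node-id-of-6016>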


Scaling in: removing the new master and replica

Take 6016 and 6017 out of the cluster, starting with the replica 6017 (sketch below)
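A sketch of removing the replica; del-node takes the address of any cluster node plus the id of the node to delete:

redis-cli -a 123456 --cluster del-node 172.16.64.21:6017 <node-id-of-6017>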


Reassign 6016's slots; for simplicity, hand them all to 6007 (sketch below)
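A sketch of the reshard that empties 6016; this time 6007 receives and 6016 is the only source:

redis-cli -a 123456 --cluster reshard 172.16.64.21:6007
# How many slots do you want to move?  -> 4096
# What is the receiving node ID?       -> <node id of 6007>
# Source node #1:                      -> <node id of 6016>, then done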


Check the cluster state

Remove 6016 from the cluster (sketch below)
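Once 6016 holds no slots it can be deleted, and the cluster is back to three masters and three replicas:

redis-cli -a 123456 --cluster del-node 172.16.64.21:6016 <node-id-of-6016>
redis-cli -a 123456 --cluster check 172.16.64.21:6007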


Whether the cluster keeps serving requests when some slots are uncovered (cluster-require-full-coverage)

Integrating the cluster with Spring Boot

Does a Spring Boot client of Redis Cluster need read/write splitting, as in the sentinel setup?

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.17</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.redis</groupId>
    <artifactId>redis-cluster</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>redis-cluster</name>
    <description>redis-cluster</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-devtools</artifactId>
            <scope>runtime</scope>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>com.mysql</groupId>
            <artifactId>mysql-connector-j</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-pool2</artifactId>
        </dependency>

    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
        </plugins>
    </build>

</project>

Configuration file (application.properties)

#Data source
spring.datasource.druid.username=root
spring.datasource.druid.password=xgm@2023..
spring.datasource.druid.url=jdbc:mysql://172.16.204.51:3306/redis?serverTimezone=GMT%2B8
spring.datasource.druid.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.druid.initial-size=5

#Redis cluster
#Adaptive topology refresh: react to cluster events; disabled (false) by default
spring.redis.lettuce.cluster.refresh.adaptive=true
#Periodic topology refresh interval
spring.redis.lettuce.cluster.refresh.period=2000
#Cluster nodes
spring.redis.cluster.nodes=172.16.64.21:6007,172.16.64.21:6008,172.16.64.21:6009,172.16.64.21:6012,172.16.64.21:6014,172.16.64.21:6015
spring.redis.password=123456
spring.redis.timeout=60
#Lettuce is the default client and is thread-safe. Jedis is synchronous, does not support async calls, and its client instances are not thread-safe (one instance per thread), so Jedis is normally used through a connection pool.
# Maximum number of connections in the pool (a negative value means no limit)
spring.redis.lettuce.pool.max-active=50
# Maximum number of idle connections in the pool
spring.redis.lettuce.pool.max-idle=50
# Maximum time to block waiting for a connection (a negative value means no limit)
spring.redis.lettuce.pool.max-wait=1000
# Minimum number of idle connections in the pool
spring.redis.lettuce.pool.min-idle=5
spring.redis.lettuce.shutdown-timeout=1000
#Interval between runs of the pool eviction thread
spring.redis.lettuce.pool.time-between-eviction-runs=2000
#Maximum number of redirects to follow (cluster data is spread across nodes, so requests may be redirected)
spring.redis.cluster.max-redirects=3
#Maximum number of connection retry attempts
spring.redis.cluster.max-attempts=3

#Logging
logging.pattern.console='%date{yyyy-MM-dd HH:mm:ss.SSS} | %highlight(%5level) [%green(%16.16thread)] %clr(%-50.50logger{49}){cyan} %4line -| %highlight(%msg%n)'
logging.level.root=info
logging.level.io.lettuce.core=debug
logging.level.org.springframework.data.redis=debug

Configuration class

package com.redis.redissentinel.conf;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import io.lettuce.core.ReadFrom;
import io.lettuce.core.TimeoutOptions;
import io.lettuce.core.cluster.ClusterClientOptions;
import io.lettuce.core.cluster.ClusterTopologyRefreshOptions;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.pool2.impl.GenericObjectPoolConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.*;
import org.springframework.data.redis.connection.lettuce.LettuceClientConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import java.text.SimpleDateFormat;
import java.time.Duration;
import java.util.*;

import org.springframework.data.redis.core.RedisTemplate;


/**
 * @author ygr
 * @date 2022-02-15 16:30
 */
@Slf4j
@Configuration
public class RedisConfig {
    @Value("${spring.redis.lettuce.pool.max-idle}")
    String maxIdle;
    @Value("${spring.redis.lettuce.pool.min-idle}")
    String minIdle;
    @Value("${spring.redis.lettuce.pool.max-active}")
    String maxActive;
    @Value("${spring.redis.lettuce.pool.max-wait}")
    String maxWait;

    @Value("${spring.redis.lettuce.pool.time-between-eviction-runs}")
    String timeBetweenEvictionRunsMillis;
    @Value("${spring.redis.cluster.nodes}")
    String clusterNodes;
    @Value("${spring.redis.password}")
    String password;
    @Value("${spring.redis.cluster.max-redirects}")
    String maxRedirects;
    @Value("${spring.redis.lettuce.cluster.refresh.period}")
    String period;
    @Value("${spring.redis.timeout}")
    String timeout;

    @Bean
    public LettuceConnectionFactory lettuceConnectionFactory() {
        GenericObjectPoolConfig genericObjectPoolConfig = new GenericObjectPoolConfig();
        genericObjectPoolConfig.setMaxIdle(Integer.parseInt(maxIdle));
        genericObjectPoolConfig.setMinIdle(Integer.parseInt(minIdle));
        genericObjectPoolConfig.setMaxTotal(Integer.parseInt(maxActive));
        genericObjectPoolConfig.setMaxWait(Duration.ofMillis(Long.parseLong(maxWait)));
        genericObjectPoolConfig.setTimeBetweenEvictionRuns(Duration.ofMillis(Long.parseLong(timeBetweenEvictionRunsMillis)));
        String[] nodes = clusterNodes.split(",");
        List<RedisNode> listNodes = new ArrayList();
        for (String node : nodes) {
            String[] ipAndPort = node.split(":");
            RedisNode redisNode = new RedisNode(ipAndPort[0], Integer.parseInt(ipAndPort[1]));
            listNodes.add(redisNode);
        }
        RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
        redisClusterConfiguration.setClusterNodes(listNodes);
        redisClusterConfiguration.setPassword(password);
        redisClusterConfiguration.setMaxRedirects(Integer.parseInt(maxRedirects));
        // Configure automatic cluster topology refresh
        ClusterTopologyRefreshOptions topologyRefreshOptions = ClusterTopologyRefreshOptions.builder()
                .enablePeriodicRefresh(Duration.ofMillis(Long.parseLong(period))) // refresh the topology periodically (period is in milliseconds)
                .enableAllAdaptiveRefreshTriggers() // also refresh on adaptive triggers such as MOVED/ASK redirects and reconnects
                .build();

        ClusterClientOptions clusterClientOptions = ClusterClientOptions.builder()
                // command timeout; only after a timeout is the connection re-established using the refreshed topology
                .timeoutOptions(TimeoutOptions.enabled(Duration.ofMillis(Long.parseLong(period))))
                .topologyRefreshOptions(topologyRefreshOptions)
                .build();
        LettuceClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
                .commandTimeout(Duration.ofSeconds(Long.parseLong(timeout)))
                .poolConfig(genericObjectPoolConfig)
                .readFrom(ReadFrom.REPLICA_PREFERRED) // prefer reading from replicas
                .clientOptions(clusterClientOptions)
                .build();
        LettuceConnectionFactory factory = new LettuceConnectionFactory(redisClusterConfiguration, clientConfig);
        return factory;

    }



    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setTimeZone(TimeZone.getTimeZone("GMT+8"));
        objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        objectMapper.setDateFormat(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));
        objectMapper.configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);
        return objectMapper;
    }

    @Bean
    @ConditionalOnMissingBean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory  factory) {
        factory.setShareNativeConnection(false);
        LettuceClientConfiguration clientConfiguration = factory.getClientConfiguration();
        // Create the RedisTemplate<String, Object> instance
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);

        // Jackson serializer used for values
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper());

        StringRedisSerializer stringSerial = new StringRedisSerializer();
        // Keys are serialized as plain strings
        template.setKeySerializer(stringSerial);
        // Values are serialized with Jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // Hash keys are serialized as plain strings
        template.setHashKeySerializer(stringSerial);
        // Hash values are serialized with Jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        template.afterPropertiesSet();
        return template;
    }

}


//--------------------------- The approach below is recommended ---------------------------//



package com.redis.redissentinel.conf;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import io.lettuce.core.ReadFrom;
import lombok.extern.slf4j.Slf4j;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.boot.autoconfigure.data.redis.RedisProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.RedisPassword;
import org.springframework.data.redis.connection.RedisSentinelConfiguration;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.connection.lettuce.LettucePoolingClientConfiguration;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.HashSet;
import java.util.TimeZone;
import org.springframework.data.redis.core.RedisTemplate;
/**
 * @author ygr
 * @date 2022-02-15 16:30
 */
@Slf4j
@Configuration
public class RedisConfig {

    public ObjectMapper objectMapper() {
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setTimeZone(TimeZone.getTimeZone("GMT+8"));
        objectMapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        objectMapper.setDateFormat(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));
        objectMapper.configure(JsonParser.Feature.ALLOW_SINGLE_QUOTES, true);
        return objectMapper;
    }

    @Bean
    @ConditionalOnMissingBean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        // Create the RedisTemplate<String, Object> instance
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);

        // Jackson serializer used for values
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        jackson2JsonRedisSerializer.setObjectMapper(objectMapper());

        StringRedisSerializer stringSerial = new StringRedisSerializer();
        // Keys are serialized as plain strings
        template.setKeySerializer(stringSerial);
        // Values are serialized with Jackson
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // Hash keys are serialized as plain strings
        template.setHashKeySerializer(stringSerial);
        // Hash values are serialized with Jackson
        template.setHashValueSerializer(jackson2JsonRedisSerializer);

        template.afterPropertiesSet();
        return template;
    }
    @Bean
    public RedisConnectionFactory lettuceConnectionFactory(RedisProperties redisProperties) {
        RedisSentinelConfiguration sentinelConfig = new RedisSentinelConfiguration()
                .master(redisProperties.getSentinel().getMaster());
        redisProperties.getSentinel().getNodes().forEach(s -> {
            String[] arr = s.split(":");
            sentinelConfig.sentinel(arr[0],Integer.parseInt(arr[1]));
        });
        LettucePoolingClientConfiguration lettuceClientConfiguration = LettucePoolingClientConfiguration.builder()
                // Read/write splitting. If the master can handle the full read/write load, this is unnecessary and everything can go to the master.
                // ANY: read from any node; NEAREST: read from the lowest-latency node; MASTER_PREFERRED / UPSTREAM_PREFERRED: prefer the master, fall back to replicas; MASTER / UPSTREAM: read only from the master; ANY_REPLICA: read from any replica
                .readFrom(ReadFrom.ANY_REPLICA)
                .build();
        sentinelConfig.setPassword(RedisPassword.of(redisProperties.getPassword()));
        sentinelConfig.setDatabase(redisProperties.getDatabase());
        return new LettuceConnectionFactory(sentinelConfig, lettuceClientConfiguration);
    }

}

Test

package com.redis.redissentinel;

import lombok.extern.slf4j.Slf4j;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;

import javax.annotation.Resource;
import java.util.concurrent.TimeUnit;

@Slf4j
@SpringBootTest
class RedisSentinelApplicationTests {

    @Resource
    private RedisTemplate<String, Object> redisTemplate;


    @Test
    void writeTest() {
        for (int i = 0; i < 3; i++) {
            try {
                redisTemplate.opsForValue().set("k" + i, "v" + i);
                log.info("set value success: {}", i);

                Object val = redisTemplate.opsForValue().get("k" + i);
                log.info("get value success: {}", val);
                TimeUnit.SECONDS.sleep(1);
            } catch (Exception e) {
                log.error("error: {}", e.getMessage());
            }
        }
        log.info("finished...");
    }

    @Test
    void readTest() {
        Object k1 = redisTemplate.opsForValue().get("k1");
        log.info("读取节点k1的值:{}",k1);
    }

}

Current cluster node information
