Create a basic Spring Boot project. I won't go into the details here; if you are not sure how, search for a tutorial.
pom.xml
<dependencies>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>io.projectreactor</groupId>
        <artifactId>reactor-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid</artifactId>
        <version>1.1.6</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.6</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>1.1.1</version>
    </dependency>
    <dependency>
        <groupId>net.sf.json-lib</groupId>
        <artifactId>json-lib</artifactId>
        <version>2.4</version>
        <classifier>jdk15</classifier>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.58</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>18.0</version>
    </dependency>
</dependencies>
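One note on the list above: it does not show spring-boot-starter-web or spring-boot-starter-data-redis, which the REST controller and the Lettuce/RedisTemplate code further down rely on. If they are not already in your pom, add them as below. Also, the RedisCacheManager/Lettuce APIs used later assume Spring Boot 2.x, where you would normally need the 2.x line of mybatis-spring-boot-starter rather than 1.1.1.
<!-- Assumed additions, only if not already present in your project -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>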
application.properties
spring.profiles.active=test
application-test.yml
server:
  port: 8899
mybatis:
  mapper-locations: classpath*:mapper/*Mapper.xml
logging:
  level:
    com.example.demo.mapper: debug
spring:
  redis:
    database: 0
    host: 127.0.0.1
    password: 123456
    port: 6379
    timeout: 5000
    jedis:
      pool:
        max-active: 8
        max-idle: 8
        max-wait: -1ms
        min-idle: 0
    lettuce:
      pool:
        max-active: 8
        max-idle: 8
        max-wait: -1ms
        min-idle: 0
      shutdown-timeout: 100ms
  datasource:
    driverClassName: com.mysql.jdbc.Driver
    username: root
    password: 123456
    url: jdbc:mysql://localhost:3306/yfDemo?useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true
    type: com.alibaba.druid.pool.DruidDataSource
Next, configure the Druid monitoring console. It will be available at http://127.0.0.1:8899/druid/login.html. Since I configured port 8899 above, the port in the URL is your project's port, not the default 8080.
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import javax.sql.DataSource;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

@Configuration
public class DruidConfiguration {

    // Bind all spring.datasource.* properties onto the Druid connection pool
    @ConfigurationProperties(prefix = "spring.datasource")
    @Bean
    public DataSource druid() {
        return new DruidDataSource();
    }

    // Register the Druid monitoring console under /druid/*
    @Bean
    public ServletRegistrationBean<StatViewServlet> statViewServlet() {
        ServletRegistrationBean<StatViewServlet> bean =
                new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
        Map<String, String> initParams = new HashMap<>();
        initParams.put("loginUsername", "admin");
        initParams.put("loginPassword", "123456");
        // an empty "allow" value means the console is reachable from any host
        initParams.put("allow", "");
        bean.setInitParameters(initParams);
        return bean;
    }

    // Collect web request statistics, excluding static resources and the console itself
    @Bean
    public FilterRegistrationBean<WebStatFilter> webStatFilter() {
        FilterRegistrationBean<WebStatFilter> bean = new FilterRegistrationBean<>();
        bean.setFilter(new WebStatFilter());
        Map<String, String> initParams = new HashMap<>();
        initParams.put("exclusions", "*.js,*.css,/druid/*");
        bean.setInitParameters(initParams);
        bean.setUrlPatterns(Arrays.asList("/*"));
        return bean;
    }
}
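Because the druid() bean above is bound to the spring.datasource prefix, the pool itself can optionally be tuned from application-test.yml. A minimal sketch (the values are only examples; the property names correspond to standard DruidDataSource setters such as setInitialSize and setMaxActive):
spring:
  datasource:
    initialSize: 5    # connections created at startup
    minIdle: 5        # minimum idle connections kept in the pool
    maxActive: 20     # upper bound on concurrent connections
    maxWait: 60000    # max milliseconds to wait for a free connection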
Now the Redis configuration. Here the cache name (a key of the map below) determines how long an entry is stored, which gives us dynamically configurable expiration times.
import com.alibaba.fastjson.support.spring.FastJsonRedisSerializer;
import com.google.common.collect.ImmutableMap;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.time.Duration;
import java.util.Map;

@Configuration
public class RedisCacheManage {

    // String keys, FastJson-serialized values
    @Bean
    public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        FastJsonRedisSerializer<Object> fastJsonRedisSerializer = new FastJsonRedisSerializer<>(Object.class);
        template.setKeySerializer(stringRedisSerializer);
        template.setHashKeySerializer(stringRedisSerializer);
        template.setValueSerializer(fastJsonRedisSerializer);
        template.setHashValueSerializer(fastJsonRedisSerializer);
        template.setEnableTransactionSupport(true);
        template.afterPropertiesSet();
        return template;
    }

    // Each entry in the map becomes a cache whose name encodes its TTL
    @Bean
    public CacheManager cacheManager(RedisTemplate<String, Object> template) {
        RedisCacheConfiguration templateRedisCacheCfg = RedisCacheConfiguration
                .defaultCacheConfig()
                .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(template.getStringSerializer()))
                .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(template.getValueSerializer()))
                .disableCachingNullValues();
        Map<String, RedisCacheConfiguration> expires = ImmutableMap.<String, RedisCacheConfiguration>builder()
                .put("1m", templateRedisCacheCfg.entryTtl(Duration.ofMinutes(1)))
                .put("30m", templateRedisCacheCfg.entryTtl(Duration.ofMinutes(30)))
                .put("60m", templateRedisCacheCfg.entryTtl(Duration.ofMinutes(60)))
                .put("1d", templateRedisCacheCfg.entryTtl(Duration.ofDays(1)))
                .put("30d", templateRedisCacheCfg.entryTtl(Duration.ofDays(30)))
                .put("common-30d", templateRedisCacheCfg.entryTtl(Duration.ofDays(30)))
                .build();
        // Cache names not listed above fall back to a one-hour TTL
        return RedisCacheManager.RedisCacheManagerBuilder
                .fromConnectionFactory(template.getConnectionFactory())
                .cacheDefaults(templateRedisCacheCfg.entryTtl(Duration.ofHours(1)))
                .transactionAware()
                .initialCacheNames(expires.keySet())
                .withInitialCacheConfigurations(expires)
                .build();
    }
}
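Besides annotation-driven caching, the RedisTemplate bean can also be used directly. A minimal sketch, assuming the configuration above is in place (the RedisDemoRunner class name and the demo key are made up for illustration):
import org.springframework.boot.CommandLineRunner;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;

import java.time.Duration;

// Hypothetical helper, only to show the template in use; not part of the original project.
@Component
public class RedisDemoRunner implements CommandLineRunner {

    private final RedisTemplate<String, Object> redisTemplate;

    public RedisDemoRunner(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    @Override
    public void run(String... args) {
        // keys go through the StringRedisSerializer, values through the FastJsonRedisSerializer configured above
        redisTemplate.opsForValue().set("demo::hello", "world", Duration.ofMinutes(1));
        Object value = redisTemplate.opsForValue().get("demo::hello");
        System.out.println("read back from redis: " + value);
    }
}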
At this point there is a key annotation that must be added, otherwise the Redis cache will not take effect: @EnableCaching. The @MapperScan annotation scans the MyBatis mapper interfaces. Both go on the Spring Boot startup class, as sketched below.
@MapperScan("com.example.demo.mapper")
@EnableCaching
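A minimal sketch of the startup class carrying both annotations (the class name DemoApplication is just an assumption; use your own main class):
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.annotation.EnableCaching;

@SpringBootApplication
@EnableCaching                           // without this, @Cacheable is ignored and nothing reaches Redis
@MapperScan("com.example.demo.mapper")   // scans the MyBatis mapper interfaces
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}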
Next we write a simple query endpoint. I won't go into the MyBatis details here.
@RequestMapping("/redis")
@RestController
public class RedisController {
@Autowired
private RedisService redisService;
@GetMapping("/getCompany")
public List<CompanyModel> getCompany(){
return redisService.getCompany();
}
}
public interface RedisService {
    List<CompanyModel> getCompany();
}
@Service
public class RedisServiceImpl implements RedisService {

    @Autowired
    private CompanyMapper companyMapper;

    // "1m" is one of the cache names configured above, so this result expires after
    // one minute; the entry is stored under the Redis key 1m::12.
    @Override
    @Cacheable(value = "1m", key = "12")
    public List<CompanyModel> getCompany() {
        return companyMapper.selectCompanyAll();
    }
}
@Repository
public interface CompanyMapper {
    List<CompanyModel> selectCompanyAll();
}
<select id="selectCompanyAll" resultType="com.example.demo.model.CompanyModel">
SELECT ID id,CORP_NAME name FROM TC_COMPANY_INFO
</select>
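The CompanyModel referenced above is just a plain model class. A minimal sketch using Lombok (the field names match the id/name aliases in the SQL; the field types are assumptions, adjust to your table):
import lombok.Data;

// Assumed shape of com.example.demo.model.CompanyModel.
@Data
public class CompanyModel {
    private Long id;     // mapped from ID
    private String name; // mapped from CORP_NAME
}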
In the service implementation above we use @Cacheable(value = "1m", key = "12"). The value is one of the keys of the map configured in the Redis cache manager, so "1m" gives this entry an expiration time of one minute. Now let's look at the actual effect.

Here we can see an entry named 1m::12 was stored. So can we store multiple pieces of data, each with its own expiration time, simply by varying the key? I have read many articles with a serious flaw here: they also implement dynamic expiration through the value, but they give no way to set the key, so multiple entries end up overwriting each other. The sketch below shows how the cache name and the key combine.
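A hedged sketch of what I mean: by choosing both the cache name (the TTL bucket) and a SpEL key per method, multiple entries can coexist without overwriting each other. The CompanyCacheService class and the selectCompanyById mapper method are hypothetical, added only for illustration:
import com.example.demo.mapper.CompanyMapper;
import com.example.demo.model.CompanyModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

import java.util.List;

// Hypothetical service, only to illustrate combining TTL buckets with distinct keys.
@Service
public class CompanyCacheService {

    @Autowired
    private CompanyMapper companyMapper;

    // Stored as 1m::companyList and evicted by Redis after one minute.
    @Cacheable(value = "1m", key = "'companyList'")
    public List<CompanyModel> getCompanyList() {
        return companyMapper.selectCompanyAll();
    }

    // Stored as 30m::company:<id> with a 30-minute TTL; a different cache name and key,
    // so it never collides with the entry above. selectCompanyById is a hypothetical
    // mapper method, not shown in this post.
    @Cacheable(value = "30m", key = "'company:' + #id")
    public CompanyModel getCompanyById(Long id) {
        return companyMapper.selectCompanyById(id);
    }
}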
Finally, there is the problem of inconsistency between Redis and the database. There is currently no complete solution; the approaches you find online all have drawbacks. So we only cache data in Redis that rarely changes, which largely avoids the problem. A common partial mitigation is sketched below.
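One common partial mitigation, consistent with the caveat above rather than a full solution, is to evict the cached entry whenever the database is written, so the next read repopulates the cache from MySQL. A sketch, where CompanyWriteService and the updateCompany mapper method are hypothetical:
import com.example.demo.mapper.CompanyMapper;
import com.example.demo.model.CompanyModel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.stereotype.Service;

// Hypothetical write-side service, only to illustrate cache eviction on update.
@Service
public class CompanyWriteService {

    @Autowired
    private CompanyMapper companyMapper;

    // Removes the 1m::12 entry created by getCompany(), so stale data is not served
    // after an update. updateCompany is a hypothetical mapper method.
    @CacheEvict(value = "1m", key = "12")
    public void updateCompany(CompanyModel company) {
        companyMapper.updateCompany(company);
    }
}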