Apache Kafka Producer: Core Parameters and Implementation of Batch Message Sending



Table of Contents


  • Overview
  • Parameter Settings
  • Code

    • POM Dependencies
    • Configuration File
    • Producer
    • Consumer
    • Unit Test
    • Test Results

  • Source Code


Overview

Kafka has a "micro batch" concept whose purpose is to improve the Producer's send performance.

Unlike RocketMQ, which exposes an API for sending multiple messages in one call, Kafka takes a different approach: it provides a RecordAccumulator (a message collector) that buffers records destined for the same partition of the same Topic, and once a trigger condition is met, submits the buffered records to the Kafka Broker in a single batch.

Parameter Settings

https://kafka.apache.org/24/documentation.html#producerconfigs

Three parameters are mainly involved. A batch is sent once it fills up (batch-size) or the maximum wait time (linger.ms) expires, while buffer-memory caps the total amount the producer may buffer (a minimal sketch mapping these onto the raw producer client follows the list):

  • batch-size: the maximum size, in bytes, of one batch of records for a single partition; once a batch reaches this size it is sent. Default: 16 KB (16384 bytes).

  • buffer-memory: the total memory the producer may use to buffer records that have not yet been delivered to the broker. Default: 32 MB (33554432 bytes).

  • linger.ms: the maximum time, in milliseconds, to wait for more records before sending a not-yet-full batch. Default: 0.
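
To see where these three settings live outside of Spring, here is a minimal sketch using the plain Kafka client. The broker address and topic name ("localhost:9092", "demo-topic") are placeholders, not taken from the original project; send() only appends the record to the RecordAccumulator, and the background sender thread ships a batch once it is full or linger.ms has elapsed.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class PlainBatchingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);        // batch.size: max bytes per partition batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);            // linger.ms: wait up to 10 ms for more records
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L); // buffer.memory: total buffer for unsent records

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 100; i++) {
                // send() only hands the record to the RecordAccumulator and returns a Future;
                // the sender thread flushes full or expired batches in the background
                producer.send(new ProducerRecord<>("demo-topic", "key-" + i, "value-" + i));
            }
        } // close() blocks until all buffered batches have been sent
    }
}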

Code

POM Dependencies

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Kafka dependency -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <scope>test</scope>
    </dependency>
</dependencies>

Configuration File

spring:
  # Kafka configuration, bound to the KafkaProperties configuration class
  kafka:
    bootstrap-servers: 192.168.126.140:9092 # Kafka Broker addresses; multiple brokers can be listed, separated by commas
    # Kafka Producer settings
    producer:
      acks: 1 # 0 - no acknowledgement; 1 - leader acknowledges; all - all in-sync replicas acknowledge
      retries: 3 # number of retries when a send fails
      key-serializer: org.apache.kafka.common.serialization.StringSerializer # serializer for the message key
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer # serializer for the message value
      batch-size: 16384 # maximum size of one batch, in bytes; default 16 KB
      buffer-memory: 33554432 # total buffer memory for unsent messages, in bytes; default 32 MB
      properties:
        linger:
          ms: 10000 # upper bound on batching delay. [A real application would not set this so high; it is only for this test.] After 10 * 1000 ms, a send is triggered regardless of whether batch-size or buffer-memory has been reached.
    # Kafka Consumer settings
    consumer:
      auto-offset-reset: earliest # start a new consumer group from the earliest offset
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring:
          json:
            trusted:
              packages: com.artisan.springkafka.domain
    # Kafka Consumer Listener settings
    listener:
      missing-topics-fatal: false # by default, startup fails if a listened-to topic does not exist; set to false to avoid that error

logging:
  level:
    org:
      springframework:
        kafka: ERROR # spring-kafka
      apache:
        kafka: ERROR # kafka
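
Note that the Spring Boot keys batch-size, buffer-memory and properties.linger.ms above are just relocations of the native Kafka settings batch.size, buffer.memory and linger.ms. A minimal sketch to confirm the effective values at startup; this class is not part of the original project and assumes Spring Boot 2.x, where KafkaProperties#buildProducerProperties() takes no arguments.

package com.artisan.springkafka.config; // hypothetical location, not in the original repository

import org.apache.kafka.clients.producer.ProducerConfig;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.stereotype.Component;

import java.util.Map;

@Component
public class ProducerConfigLogger implements CommandLineRunner {

    private final KafkaProperties kafkaProperties;

    public ProducerConfigLogger(KafkaProperties kafkaProperties) {
        this.kafkaProperties = kafkaProperties;
    }

    @Override
    public void run(String... args) {
        // buildProducerProperties() assembles the map that is handed to the Kafka producer
        Map<String, Object> cfg = kafkaProperties.buildProducerProperties();
        System.out.println("batch.size    = " + cfg.get(ProducerConfig.BATCH_SIZE_CONFIG));
        System.out.println("buffer.memory = " + cfg.get(ProducerConfig.BUFFER_MEMORY_CONFIG));
        System.out.println("linger.ms     = " + cfg.get(ProducerConfig.LINGER_MS_CONFIG));
    }
}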


Producer

package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

import java.util.Random;
import java.util.concurrent.ExecutionException;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:25
 * @mark: show me the code , change the world
 */

@Component
public class ArtisanProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Synchronous send
     * @return the send result
     * @throws ExecutionException
     * @throws InterruptedException
     */
    public SendResult sendMsgSync() throws ExecutionException, InterruptedException {
        // build a mock message
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "artisanTestMessage-" + id);
        // send and block until the result is available
        return kafkaTemplate.send(TOPIC.TOPIC, messageMock).get();
    }

    /**
     * Asynchronous send
     */
    public ListenableFuture<SendResult<Object, Object>> sendMsgASync() throws ExecutionException, InterruptedException {
        // build a mock message
        Integer id = new Random().nextInt(100);
        MessageMock messageMock = new MessageMock(id, "messageSendByAsync-" + id);
        // send asynchronously; the returned future completes once the broker acknowledges
        ListenableFuture<SendResult<Object, Object>> result = kafkaTemplate.send(TOPIC.TOPIC, messageMock);
        return result;
    }

}
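
With linger.ms set to 10 seconds, a record handed to KafkaTemplate may sit in the RecordAccumulator for up to 10 seconds. If a caller needs a record pushed out immediately, KafkaTemplate#flush() drains the buffered batches. A minimal sketch; this helper class is not part of the original project:

package com.artisan.springkafka.producer;

import com.artisan.springkafka.constants.TOPIC;
import com.artisan.springkafka.domain.MessageMock;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class ArtisanFlushingProducerMock {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    /**
     * Send one message and force the accumulator to drain instead of waiting for linger.ms.
     */
    public void sendAndFlush(MessageMock messageMock) {
        kafkaTemplate.send(TOPIC.TOPIC, messageMock);
        // flush() blocks until every record currently buffered by the producer has been sent
        kafkaTemplate.flush();
    }
}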

Consumer

package com.artisan.springkafka.consumer;

import com.artisan.springkafka.domain.MessageMock;
import com.artisan.springkafka.constants.TOPIC;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:33
 * @mark: show me the code , change the world
 */

@Component
public class ArtisanCosumerMock {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MOCK-A";

    @KafkaListener(topics = TOPIC.TOPIC, groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC)
    public void onMessage(MessageMock messageMock) {
        logger.info("[Message received][thread: {}, payload: {}]", Thread.currentThread().getName(), messageMock);
    }

}

package com.artisan.springkafka.consumer;

import com.artisan.springkafka.domain.MessageMock;
import com.artisan.springkafka.constants.TOPIC;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:33
 * @mark: show me the code , change the world
 */

@Component
public class ArtisanCosumerMockDiffConsumeGroup {

    private Logger logger = LoggerFactory.getLogger(getClass());

    private static final String CONSUMER_GROUP_PREFIX = "MOCK-B";

    @KafkaListener(topics = TOPIC.TOPIC, groupId = CONSUMER_GROUP_PREFIX + TOPIC.TOPIC)
    public void onMessage(MessageMock messageMock) {
        logger.info("[Message received][thread: {}, payload: {}]", Thread.currentThread().getName(), messageMock);
    }

}
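
The producer and consumers above reference a TOPIC constants class and a MessageMock domain object that are not listed in this post (they live in the linked repository). A minimal sketch of what they might look like, inferred from the test output below; the exact constant and field definitions are assumptions:

package com.artisan.springkafka.constants;

// Assumed constants class; the topic name "MOCK_TOPIC" is inferred from the test output below.
public class TOPIC {
    public static final String TOPIC = "MOCK_TOPIC";
}

package com.artisan.springkafka.domain;

// Assumed domain object; fields are inferred from the logged toString() output.
// A no-argument constructor is required so JsonDeserializer can rebuild the object.
public class MessageMock {

    private Integer id;
    private String name;

    public MessageMock() {
    }

    public MessageMock(Integer id, String name) {
        this.id = id;
        this.name = name;
    }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public String toString() {
        return "MessageMock{id=" + id + ", name='" + name + "'}";
    }
}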

Unit Test

package com.artisan.springkafka.produceTest;

import com.artisan.springkafka.SpringkafkaApplication;
import com.artisan.springkafka.producer.ArtisanProducerMock;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.kafka.support.SendResult;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

/**
 * @author 小工匠
 * @version 1.0
 * @description: TODO
 * @date 2021/2/17 22:40
 * @mark: show me the code , change the world
 */

@RunWith(SpringRunner.class)
@SpringBootTest(classes = SpringkafkaApplication.class)
public class ProduceMockTest {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private ArtisanProducerMock artisanProducerMock;

    @Test
    public void testAsynSend() throws ExecutionException, InterruptedException {
        logger.info("Start sending");

        for (int i = 0; i < 2; i++) {
            artisanProducerMock.sendMsgASync().addCallback(new ListenableFutureCallback<SendResult<Object, Object>>() {
                @Override
                public void onFailure(Throwable throwable) {
                    logger.info("Send failed: {}", throwable);
                }

                @Override
                public void onSuccess(SendResult<Object, Object> objectObjectSendResult) {
                    logger.info("Callback result = topic:[{}], partition:[{}], offset:[{}]",
                            objectObjectSendResult.getRecordMetadata().topic(),
                            objectObjectSendResult.getRecordMetadata().partition(),
                            objectObjectSendResult.getRecordMetadata().offset());
                }
            });
            // send twice, sleeping 5 seconds in between, to add up to the configured linger.ms of 10000 ms
            TimeUnit.SECONDS.sleep(5);
        }

        // block so the consumers have time to receive the messages
        new CountDownLatch(1).await();
    }

}

Two messages are sent asynchronously, with a 5-second sleep between the sends, so that the configured linger.ms maximum wait of 10 seconds is reached.

Test Results

2021-02-18 10:58:53.360  INFO 24736 --- [           main] c.a.s.produceTest.ProduceMockTest        : Start sending
2021-02-18 10:59:03.555  INFO 24736 --- [ad | producer-1] c.a.s.produceTest.ProduceMockTest        : Callback result = topic:[MOCK_TOPIC], partition:[0], offset:[30]
2021-02-18 10:59:03.556  INFO 24736 --- [ad | producer-1] c.a.s.produceTest.ProduceMockTest        : Callback result = topic:[MOCK_TOPIC], partition:[0], offset:[31]
2021-02-18 10:59:03.595  INFO 24736 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock        : [Message received][thread: org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1, payload: MessageMock{id=6, name='messageSendByAsync-6'}]
2021-02-18 10:59:03.595  INFO 24736 --- [ntainer#1-0-C-1] a.s.c.ArtisanCosumerMockDiffConsumeGroup : [Message received][thread: org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1, payload: MessageMock{id=6, name='messageSendByAsync-6'}]
2021-02-18 10:59:03.595  INFO 24736 --- [ntainer#1-0-C-1] a.s.c.ArtisanCosumerMockDiffConsumeGroup : [Message received][thread: org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1, payload: MessageMock{id=94, name='messageSendByAsync-94'}]
2021-02-18 10:59:03.595  INFO 24736 --- [ntainer#0-0-C-1] c.a.s.consumer.ArtisanCosumerMock        : [Message received][thread: org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1, payload: MessageMock{id=94, name='messageSendByAsync-94'}]


After 10 seconds the maximum batch wait time is reached, so the 2 messages are sent by the Producer as a single batch. Since acks=1 is configured, the broker must acknowledge the send before the ListenableFutureCallback methods are invoked.

Of course, such a long interval is only used here for testing; in practice, choose a reasonable value based on the actual business scenario.
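
For completeness, here is a sketch of a companion test for the synchronous path exposed by sendMsgSync(); it is not part of the original post and would be added to the ProduceMockTest class above. Because get() blocks until the broker (the leader, with acks=1) acknowledges the record, the call returns no later than roughly linger.ms after the record was buffered, assuming the batch does not fill first.

    @Test
    public void testSyncSend() throws ExecutionException, InterruptedException {
        // blocks until the leader has acknowledged the record
        SendResult result = artisanProducerMock.sendMsgSync();
        logger.info("Sync send result = topic:[{}], partition:[{}], offset:[{}]",
                result.getRecordMetadata().topic(),
                result.getRecordMetadata().partition(),
                result.getRecordMetadata().offset());
    }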

Source Code

https://github.com/yangshangwei/boot2/tree/master/springkafkaBatchSend


