Kafka Integration with Spark

笙烛 2022-10-16


Scala Environment Setup

Download scala-2.12.11, extract it to the installation directory, and configure the environment variables.


Verify that Scala is installed correctly, for example by running scala -version in a terminal and checking that it reports 2.12.11.


Creating the Project

Install the Scala plugin in IDEA.


Create a new Maven project named spark-kafka. Right-click the spark-kafka project, choose Add Framework Support..., and check Scala.


Under main, create a scala folder, then right-click it, choose Mark Directory as, and select Sources Root.


log4j.properties (under src/main/resources):

log4j.rootLogger=error, stdout,R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%5L) : %m%n

log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=../log/agent.log
log4j.appender.R.MaxFileSize=1024KB
log4j.appender.R.MaxBackupIndex=1

log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %5p --- [%50t] %-80c(line:%6L) : %m%n

Add the following dependencies to pom.xml:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.12</artifactId>
    <version>3.0.0</version>
</dependency>

Spark Producer


package com.chen.spark

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.serialization.StringSerializer

// Kafka producer written in Scala
object SparkKafkaProducer {
  def main(args: Array[String]): Unit = {
    // 0. Producer configuration
    val properties = new Properties()
    properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop100:9092,hadoop101:9092,hadoop102:9092")
    properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])
    properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer])

    // 1. Create the producer
    val producer = new KafkaProducer[String, String](properties)

    // 2. Send data
    for (i <- 1 to 5) {
      producer.send(new ProducerRecord[String, String]("chen", "scala" + i))
    }

    // 3. Close resources
    producer.close()
  }
}
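
The loop above sends records fire-and-forget. As a minimal sketch (same topic and brokers as above; it additionally assumes the imports org.apache.kafka.clients.producer.{Callback, RecordMetadata}), the send can take a callback so each acknowledgement is logged:

// Variation of the send loop: attach a callback that logs where each record was written.
for (i <- 1 to 5) {
  producer.send(new ProducerRecord[String, String]("chen", "scala" + i), new Callback {
    override def onCompletion(metadata: RecordMetadata, exception: Exception): Unit = {
      if (exception != null) exception.printStackTrace()
      else println(s"acked: partition=${metadata.partition()} offset=${metadata.offset()}")
    }
  })
}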

Start ZooKeeper, then Kafka, then start a Kafka console consumer:
bin/kafka-console-consumer.sh --bootstrap-server hadoop100:9092 --topic chen

Run SparkKafkaProducer; the console consumer on the server receives the five messages (scala1 through scala5).


Spark Consumer

package com.chen.spark

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkKafkaConsumer {
  def main(args: Array[String]): Unit = {
    // 1. Initialize the streaming context (3-second batches)
    val conf = new SparkConf().setMaster("local[*]").setAppName("spark-kafka")
    val ssc = new StreamingContext(conf, Seconds(3))

    // 2. Kafka consumer parameters
    val kafkaPara = Map[String, Object](
      ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "hadoop100:9092,hadoop101:9092,hadoop102:9092",
      ConsumerConfig.GROUP_ID_CONFIG -> "chenGroup",
      ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
      ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer]
    )

    // 3. Create a direct stream subscribed to the "chen" topic
    val kafkaDStream = KafkaUtils.createDirectStream(
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Set("chen"), kafkaPara)
    )

    // 4. Take the value of each record and print it
    val valueDStream = kafkaDStream.map(record => record.value())
    valueDStream.print()

    // 5. Start the computation and block
    ssc.start()
    ssc.awaitTermination()
  }
}
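
The stream above only prints each record's value. If an actual word count per batch is wanted (as in the usual WordCount example), a minimal sketch reusing valueDStream from the code above would be:

// Split each message on spaces, pair each word with 1, and sum the counts within each 3-second batch.
val wordCountDStream = valueDStream
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)
wordCountDStream.print()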

Run SparkKafkaConsumer, then start ZooKeeper and Kafka, and then start a Kafka console producer:
bin/kafka-console-producer.sh --bootstrap-server hadoop100:9092 --topic chen

Send a few messages.

The messages sent from the console producer show up in the IDEA console.


