Spring Cloud Learning, Day 108: ELK

天际孤狼 · 2021-09-26

I. Introduction to ELK

1. What problems does ELK solve?

  • What ELK is:
    ELK is the combination of Elasticsearch (distributed storage and full-text search), Logstash (log collection and processing), and Kibana (visualization). Together they centralize the logs of many services in one searchable place, instead of leaving them scattered across log files on every machine.

  • ELK architecture:
    Applications ship their log events to Logstash; Logstash filters the events and writes them into an Elasticsearch index; Kibana queries Elasticsearch to search and visualize the data.

II. Installing ELK

1. Installing Elasticsearch:

  • Installation prerequisites: Elasticsearch 6.x needs Java 8; on an older CentOS 6 system the kernel also has to be upgraded first (steps below).
  • Linux kernel upgrade steps:
    (1) Run:
rpm --import http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-7
yum update nss

(2) Run:

rpm -Uvh http://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm

(3) Run:

yum --enablerepo=elrepo-kernel install kernel-lt -y

(4) Edit grub.conf so the system boots with the new kernel:

vim /etc/grub.conf
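After editing grub.conf, reboot; a quick sanity check that the machine actually came up on the new kernel:

```shell
# print the running kernel release; after the upgrade this should show
# the newly installed kernel-lt version rather than the old one
uname -r
```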
  • Install Elasticsearch (download the elasticsearch tarball from artifacts.elastic.co, extract it, and place it at /usr/local/elasticsearch, the path used in the steps below):

  • System settings required by Elasticsearch:
    (1) Open the file:
vi /etc/security/limits.conf

(2) Add the following lines:

* soft nofile 65536
* hard nofile 65536
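Changes to limits.conf only apply to new login sessions; after logging out and back in, the open-file limits can be verified with:

```shell
# soft and hard open-file limits for the current session;
# both should report 65536 once the new limits are in effect
ulimit -Sn
ulimit -Hn
```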
  • Raise the limit on the number of processes/threads a user may create (nproc):
    (1) Open the file:
vi /etc/security/limits.d/90-nproc.conf

(2) Change the contents to:

*          soft    nproc     4096
root       soft    nproc     unlimited

(3) Add the following line to /etc/sysctl.conf (this is a kernel parameter, not a limits.conf entry), then apply it with sysctl -p:

vm.max_map_count=655360
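Since vm.max_map_count is a kernel parameter, the active value can be read straight back from /proc after running `sysctl -p` as root:

```shell
# currently active limit on memory map areas per process;
# Elasticsearch requires at least 262144, and after sysctl -p this should read 655360
cat /proc/sys/vm/max_map_count
```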
  • Allow remote clients to connect:
    (1) Open the file:
vi config/elasticsearch.yml

(2) Set:

network.host: 0.0.0.0
http.port: 9200

  • Create a dedicated user (Elasticsearch refuses to run as root):

(1) Create a group:

groupadd elk

(2) Create the user:

useradd admin
passwd admin

(3) Add the admin user to the elk group:

usermod -G elk admin

(4) Grant sudo rights (edit /etc/sudoers, e.g. with visudo):

# Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
# find the line above and add the following below it
admin      ALL=(ALL)       ALL

(5) Give the user ownership of the installation directory:

chown -R admin:elk /usr/local/elasticsearch
  • Switch to the user:
su admin

  • Starting and stopping Elasticsearch:
    (1) Start (from the bin directory; -d runs it in the background):
./elasticsearch -d

(2) Verify that it started:

# test from the Linux server
curl http://192.168.70.140:9200
# or open it in a browser
http://192.168.70.140:9200/
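If the node is up, the curl call returns a small JSON document, roughly the following shape for a 6.x node (the name, cluster_uuid, and version details will differ on your machine):

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "6.2.3"
  },
  "tagline" : "You Know, for Search"
}
```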


III. Installing the Head Plugin

1. About the Head plugin: elasticsearch-head is a web front end for an Elasticsearch cluster; it lets you browse indices, shards, and documents and run queries from a browser.

2. Installation steps:

  • Install Node.js:
curl -sL https://rpm.nodesource.com/setup_8.x | bash -
yum install -y nodejs


  • Install cnpm (npm itself ships with Node.js; cnpm simply points npm at the Taobao registry mirror):
npm install -g cnpm --registry=https://registry.npm.taobao.org

  • Install grunt with npm:
npm install -g grunt
npm install -g grunt-cli --registry=https://registry.npm.taobao.org --no-proxy


  • Check the installed versions:
node -v
npm -v
grunt --version

  • Download the Head plugin source:
# download
wget https://github.com/mobz/elasticsearch-head/archive/master.zip
# unzip
unzip master.zip


  • Install the dependencies using the China mirror:
# enter the directory
cd elasticsearch-head-master
# install
sudo npm install -g cnpm --registry=https://registry.npm.taobao.org
sudo cnpm install


3. Configuration:

  • Configure Elasticsearch so the Head plugin can reach it over HTTP (add to elasticsearch.yml):
# new parameters that let the head plugin access es; note the space after each colon
http.cors.enabled: true
http.cors.allow-origin: "*"


  • Edit the Head plugin's Gruntfile.js and add a hostname entry to the connect.server.options block:
        connect: {
                server: {
                        options: {
                                hostname: '0.0.0.0',
                                port: 9100,
                                base: '.',
                                keepalive: true
                        }
                }
        }

  • Start:
    (1) Restart Elasticsearch:
./elasticsearch -d

(2) Start Head (in the elasticsearch-head-master directory):

grunt server
# or
npm run start
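When Head starts successfully, grunt stays in the foreground and prints output along these lines (exact wording may vary by version):

```
Running "connect:server" (connect) task
Waiting forever...
Started connect web server on http://localhost:9100
```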


  • A quick test:
    (1) Create an index:
curl -XPUT http://192.168.70.140:9200/applog

(2) Watch the change in the Head UI:

http://192.168.226.132:9100/
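The PUT request should return an acknowledgement similar to the following (typical of Elasticsearch 6.x), and the new applog index then appears in the Head overview page:

```json
{"acknowledged":true,"shards_acknowledged":true,"index":"applog"}
```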


IV. Installing Logstash

1. Download and extract:

(1) Download:

wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.3.tar.gz

(2) Extract:

tar zxvf logstash-6.2.3.tar.gz

(3) Copy it to the installation directory:


2. Smoke test (reads lines from stdin and echoes each event to stdout):

./bin/logstash -e 'input { stdin { } } output { stdout {} }'
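Once the pipeline is running, typing a line such as `hello elk` should make Logstash echo a structured event back to the console, roughly like this (the default stdout codec is rubydebug; host and timestamp will differ):

```
{
      "@version" => "1",
          "host" => "localhost",
    "@timestamp" => 2021-09-26T08:00:00.000Z,
       "message" => "hello elk"
}
```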

3. Create the pipeline configuration (config/log_to_es.conf, the file used by the start command below):

# For detail structure of this file
# See: https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html
input {
  # For detail config for log4j as input,
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-log4j.html
  tcp {
    mode => "server"
    host => "192.168.226.132"
    port => 9250
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  # For detail config for elasticsearch as output,
  # See: https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html
  elasticsearch {
    action => "index"                  # The operation on ES
    hosts  => "192.168.226.132:9200"   # Elasticsearch host, can be an array
    index  => "applog"                 # The index to write data to
  }
}
  • Start:
./bin/logstash -f config/log_to_es.conf
# or run it in the background:
./bin/logstash -f config/log_to_es.conf &
  • Test: send a log line to TCP port 9250 and check that it shows up in the applog index.

V. Installing Kibana

1. Download and extract:

(1) Download:

wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.3-linux-x86_64.tar.gz

(2) Extract:

tar zxvf kibana-6.2.3-linux-x86_64.tar.gz

(3) Copy it to the installation directory:


2. Edit config/kibana.yml:

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.70.140:9200"
kibana.index: ".kibana"

  • Start:
./bin/kibana 
  • Test: open http://192.168.70.140:5601/ in a browser.

3. The Kibana UI:

  • Discover: search and browse the raw log documents.
  • Visualize: build charts and aggregations from queries.
  • Dashboard: combine saved visualizations on one page.
  • Timelion: time-series analysis with an expression language.
  • DevTools: a console for sending raw REST requests to Elasticsearch.
  • Management: index patterns and other Kibana settings.

VI. Integrating Spring Cloud with ELK

1. Create Sleuth-Product-Service-elk:

2. Create Sleuth-Product-Provider-elk:

  • Edit the POM file:
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>Dalston.SR5</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-eureka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>1.3.0</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <!-- add the product-service coordinates -->
        <dependency>
            <groupId>com.zlw</groupId>
            <artifactId>springcloud-sleuth-product-service-elk</artifactId>
            <version>0.0.1-SNAPSHOT</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-sleuth</artifactId>
        </dependency>
        <dependency>
            <groupId>net.logstash.logback</groupId>
            <artifactId>logstash-logback-encoder</artifactId>
            <version>5.0</version>
        </dependency>
  • Edit the configuration file (application.properties):
spring.application.name=sleuth-product-provider-elk
server.port=9001

# service registry (Eureka)
eureka.client.serviceUrl.defaultZone=http://admin:123456@eureka1:8761/eureka/,http://admin:123456@eureka2:8761/eureka/
#----mysql-db-------
mybatis.type-aliases-package=com.book.product.pojo
mybatis.mapper-locations=classpath:com/book/product/mapper/*.xml

spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/book-product?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
spring.datasource.username=root
spring.datasource.password=root
  • Add a logback.xml file:
<?xml version="1.0" encoding="UTF-8"?>
<!-- this configuration sends log output to the console and, as JSON, to Logstash -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml" />

    <springProperty scope="context" name="springAppName"
        source="spring.application.name" />

    <!-- where log files are written inside the project -->
    <property name="LOG_FILE" value="${BUILD_FOLDER:-build}/${springAppName}" />

    <!-- console log pattern -->
    <property name="CONSOLE_LOG_PATTERN"
        value="%clr(%d{yyyy-MM-dd HH:mm:ss.SSS}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}" />

    <!-- console appender -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
        <!-- log encoding -->
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <!-- appender that writes JSON-formatted events to Logstash -->
    <appender name="logstash"
        class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.226.132:9250</destination>
        <!-- log encoding -->
        <encoder
            class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                        {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <!-- root log level -->
    <root level="DEBUG">
        <appender-ref ref="console" />
        <appender-ref ref="logstash" />
    </root>
</configuration>
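With this logback configuration, every log call is shipped to Logstash as one JSON object per line over TCP. Based on the encoder pattern above, a single event looks roughly like the following (all values here are illustrative; the class and message are hypothetical):

```json
{
  "@timestamp": "2021-09-26T08:00:00.000Z",
  "severity": "INFO",
  "service": "sleuth-product-provider-elk",
  "trace": "e4bbe5f9a2b3c4d5",
  "span": "a1b2c3d4e5f6a7b8",
  "exportable": "true",
  "pid": "12345",
  "thread": "http-nio-9001-exec-1",
  "class": "c.b.p.controller.ProductController",
  "rest": "query product list"
}
```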

3. Create the Consumer:

  • Test: call the provider through the consumer, then search the applog index in Kibana for the Sleuth trace ID.


