[ELK] Using a Filebeat Ingest Node Pipeline to Parse Nginx Logs into Elasticsearch (Logstash grok built-in patterns)


1    Pipeline configuration

    Define the pipeline in Kibana via Dev Tools.

    For example:

PUT _ingest/pipeline/pipeline-nginx-access
{
  "description": "nginx access log",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IP:clientip} - - \\[%{DATA:timestamp}\\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<url>[^\"]+)\" \"(?<browser>[^\"]+)\""]
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}

   If Elasticsearch returns an acknowledged response, the pipeline was created successfully.

    Viewing and deleting the pipeline:

GET _ingest/pipeline/pipeline-nginx-access

DELETE _ingest/pipeline/pipeline-nginx-access
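Before wiring the pipeline into Filebeat, the grok pattern can be sanity-checked locally. The sketch below approximates the grok building blocks (`%{IP}`, `%{WORD}`, `%{NUMBER}`, ...) with plain Python regexes and runs them against a made-up access-log line; the real grok definitions are stricter, so treat this only as a quick smoke test:

```python
import re

# Rough Python equivalents of the grok patterns used in the pipeline
# (approximations; grok's real %{IP}, %{URIPATHPARAM}, etc. are stricter).
PATTERN = re.compile(
    r'(?P<clientip>\d{1,3}(?:\.\d{1,3}){3}) - - '
    r'\[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\w+) (?P<request>\S+) HTTP/(?P<httpversion>[\d.]+)" '
    r'(?P<http_status_code>\d+) (?P<bytes>\d+) '
    r'"(?P<url>[^"]+)" "(?P<browser>[^"]+)"'
)

# A fabricated sample line in the standard Nginx combined-log format.
sample = ('192.168.1.10 - - [07/Jun/2019:10:00:00 +0800] '
          '"GET /index.html HTTP/1.1" 200 612 '
          '"http://example.com/" "Mozilla/5.0"')

m = PATTERN.match(sample)
print(m.group('clientip'), m.group('method'), m.group('http_status_code'))
```

If the pattern fails to match here, it will also fail in the grok processor and Filebeat will report indexing errors.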

 2    Configuring filebeat.yml

 *   Configure the Nginx log path

 *   Configure the index name, the pipeline name, and the Elasticsearch connection settings

 *   Configure the Kibana settings

 The full configuration:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/nginx/logs/access.log
  fields:
    type: "access-log"

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  indices:
    - index: "filebeat-nginx-access-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "access-log"
  pipelines:
    - pipeline: "pipeline-nginx-access"
      when.equals:
        fields.type: "access-log"
  username: "elastic"
  password: "changeme"

setup.kibana:
  host: "localhost:5601"
  username: "elastic"
  password: "elastic"
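The indices and pipelines sections choose an index and an ingest pipeline per event, based on the custom fields.type value set on the input. Conceptually the routing works like this (a sketch only; the "filebeat-default" fallback name is a placeholder, not Filebeat's real default index):

```python
from datetime import datetime, timezone

def route(event):
    """Return (index, pipeline) for an event, mimicking the when.equals
    rules in the config above. The fallback index name is a placeholder."""
    index, pipeline = "filebeat-default", None
    if event.get("fields", {}).get("type") == "access-log":
        day = datetime.now(timezone.utc).strftime("%Y.%m.%d")
        index = "filebeat-nginx-access-" + day   # mirrors %{+yyyy.MM.dd}
        pipeline = "pipeline-nginx-access"
    return index, pipeline

idx, pipe = route({"fields": {"type": "access-log"}, "message": "..."})
print(idx, pipe)
```

Events without fields.type: "access-log" fall through to the default index and bypass the pipeline, which is why the fields block on the input and both when.equals conditions must use the same value.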




*  Start Filebeat

./filebeat -e

 

3    Viewing the results in Kibana

 

Troubleshooting:

     Error 1:

Not yet resolved at the time of writing. Elasticsearch rejected the following pipeline definition:

PUT _ingest/pipeline/pipeline-nginx-access
{
  "description": "nginx access log",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IP:clientip} - - \\[%{DATA:timestamp}\\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}\" %{NUMBER:http_status_code} %{NUMBER:bytes} \"(?<http_referer>\\S+)\" \"(?<http_user_agent>(\\S+\\s+)*\\S+)\""]
      }
    },
    {
      "remove": {
        "field": "message"
      }
    }
  ]
}
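One plausible cause of error 1 (an editor's observation, not from the original post): `\S` written with a single backslash inside a JSON string is not a valid JSON escape sequence, so Elasticsearch rejects the whole request body before the grok pattern is even compiled. Backslashes in the pattern must be doubled (`\\S`, `\\s`), just as the `\\[` escapes already are. Any strict JSON parser reproduces the failure:

```python
import json

# A single backslash before S is not a legal JSON escape sequence.
bad  = r'{"patterns": ["(?<http_referer>\S+)"]}'    # raw \S  -> parse error
good = r'{"patterns": ["(?<http_referer>\\S+)"]}'   # \\S     -> regex \S+

try:
    json.loads(bad)
    outcome = "parsed"
except json.JSONDecodeError:
    outcome = "rejected"

print(outcome)                           # the \S variant is rejected
print(json.loads(good)["patterns"][0])   # the \\S variant parses to \S+
```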

  Error 2:

   The following error appeared when starting Filebeat.

Entering debug mode with ./filebeat -e -d "*" showed the underlying error: the grok pattern did not match the incoming log lines. Redefining the pipeline with a corrected pattern resolved the problem.

 

References:

Using a Filebeat ingest node pipeline to parse logs into Elasticsearch
https://note.yuchaoshui.com/blog/post/yuziyue/filebeat-use-ingest-node-dealwith-log-then-load-into-elasticsearch

Filebeat configuration notes
https://www.jianshu.com/p/0a5acf831409

Filebeat official documentation
https://www.elastic.co/guide/en/beats/filebeat/current/index.html

 
