
Building a Real-Time Log Analysis Platform, Step by Step

Background

Build a real-time log analysis platform based on the ELK stack (Elasticsearch, Logstash, Kibana), with Filebeat collecting the logs and Kafka buffering them.

Architecture

Logs flow through the pipeline as Filebeat → Kafka → Logstash → Elasticsearch → Kibana: Filebeat ships the raw log files into a Kafka topic, Logstash consumes that topic and writes the events to Elasticsearch, and Kibana visualizes the result.

Download

#download
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.13.1-linux-x86_64.tar.gz
wget https://downloads.apache.org/kafka/2.8.0/kafka_2.12-2.8.0.tgz
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.13.2-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.13.2-linux-x86_64.tar.gz
#extract all archives
ls *.tar.gz | xargs -n1 tar xzvf
#change the owner of the filebeat directory to root
sudo chown -hR root /home/mikey/Downloads/ELK/filebeat-7.13.1-linux-x86_64
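
As a quick sanity check of the extract-all pattern above (`ls *.tar.gz | xargs -n1 tar xzvf`), here is the same pipeline run against two dummy archives in a scratch directory; the archive and file names are made up for the demo:

```shell
# build two throwaway .tar.gz files in a temp directory
tmp=$(mktemp -d)
cd "$tmp"
echo demo1 > a.txt && tar czf pkg-a.tar.gz a.txt && rm a.txt
echo demo2 > b.txt && tar czf pkg-b.tar.gz b.txt && rm b.txt
# same pipeline as the download step: one tar invocation per archive
ls *.tar.gz | xargs -n1 tar xzvf
# a.txt and b.txt are extracted back into the current directory
ls
```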

Installation

Kafka

nohup ./bin/zookeeper-server-start.sh config/zookeeper.properties &
nohup ./bin/kafka-server-start.sh config/server.properties &
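
The broker hostnames used later in filebeat.yml (kafka1, kafka2, kafka3) must match what the brokers advertise. A minimal `config/server.properties` sketch for one broker of an assumed three-broker setup (all values here are illustrative, not from the original post):

```properties
broker.id=1
# listen on all interfaces, but advertise the hostname Filebeat will
# connect to (kafka1 is the name used in filebeat.yml; adjust per broker)
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka1:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=localhost:2181
# let Filebeat auto-create collect_log_topic, or create it explicitly
auto.create.topics.enable=true
```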

Elasticsearch

./bin/elasticsearch -d
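
Elasticsearch 7.x refuses to start in production mode without discovery settings once it binds to a non-loopback address. A minimal single-node `config/elasticsearch.yml` sketch for a test setup (the values are assumptions, not from the original post):

```yaml
# bind address; 0.0.0.0 exposes the node beyond localhost
network.host: 0.0.0.0
http.port: 9200
# single-node discovery skips the cluster-quorum bootstrap checks
discovery.type: single-node
```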

Kibana

./bin/kibana &
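
By default Kibana listens only on localhost and expects Elasticsearch at localhost:9200. A minimal `config/kibana.yml` sketch if either differs in your setup (hosts here are illustrative):

```yaml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://localhost:9200"]
```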

Filebeat

1. List the available modules

./filebeat modules list

2. Enable the modules you need

./filebeat modules enable system nginx mysql

3. Set the log file paths by editing the filebeat.yml configuration file


#configure output to Kafka
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka1:9092", "kafka2:9092", "kafka3:9092"]

  # message topic selection + partitioning
  topic: collect_log_topic
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
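
Step 3 mentions setting the log file paths; the inputs section of filebeat.yml is where they go, alongside the kafka output above. A minimal sketch (the paths are illustrative, adjust to your hosts):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  # raw log files to ship into Kafka (example paths)
  paths:
    - /var/log/nginx/access.log
    - /var/log/messages
```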

4. Fix permissions and start

sudo chown root filebeat.yml 
sudo chown root modules.d/system.yml 
sudo ./filebeat -e

5. Load the dashboards

./filebeat setup --dashboards

Logstash

1. Create the configuration file config/kafka-logstash-es.conf

input {
    kafka {
        type => "ad"
        bootstrap_servers => "127.0.0.1:9092,114.118.13.66:9093,114.118.13.66:9094"
        client_id => "es_ad"
        group_id => "es_ad"
        auto_offset_reset => "latest" # start consuming from the latest offset
        consumer_threads => 5
        decorate_events => true # adds the topic, offset, group, and partition info under [@metadata][kafka]
        topics => ["collect_log_topic"] # array type; multiple topics can be listed
        tags => ["nginx"]
    }
}
output {
        elasticsearch {
            hosts => ["114.118.10.253:9200"]
            index => "log-%{+YYYY-MM-dd}"
            timeout => 300
        }
}
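
Since decorate_events is enabled, the Kafka input stores the topic, partition, offset, and group under [@metadata][kafka], and @metadata fields are not written to Elasticsearch by default. A hedged filter sketch to keep the topic on the event (the field name kafka_topic is made up for illustration):

```conf
filter {
    mutate {
        # copy the Kafka topic out of @metadata so it survives into the index
        add_field => { "kafka_topic" => "%{[@metadata][kafka][topic]}" }
    }
}
```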

2. Create the data directory

mkdir logs_data_dir

3. Start Logstash

nohup bin/logstash -f config/kafka-logstash-es.conf --path.data=./logs_data_dir 1>/dev/null 2>&1 &
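
Once Logstash is running, events land in a daily index driven by the `log-%{+YYYY-MM-dd}` pattern in the output section. A small sketch that computes today's index name; the commented curl line assumes the Elasticsearch host from the config and a live cluster:

```shell
# %{+YYYY-MM-dd} formats the event @timestamp; for freshly shipped logs
# that is effectively today's date
INDEX="log-$(date +%Y-%m-%d)"
echo "$INDEX"
# verify the index exists once data is flowing (needs the live cluster):
# curl "http://114.118.10.253:9200/_cat/indices/${INDEX}?v"
```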

Result

References

Related post: 一篇文章搞懂filebeat(ELK) ("Understanding Filebeat in One Article")

Filebeat official documentation: Filebeat Reference

Filebeat output to Kafka: https://www.elastic.co/guide/en/beats/filebeat/current/kafka-output.html
