ELK environment setup with Docker [sebp/elk]

elk_deployment.md [GitHub]

The four products are designed for use as an integrated solution, referred to as the “Elastic Stack” (formerly the “ELK stack”).

  • 0.1 Elasticsearch, a distributed search and analytics engine.
  • 0.2 Logstash, a data collection and log-parsing engine.
  • 0.3 Kibana, an analytics and visualisation platform.
  • 0.4 Beats, a collection of lightweight data shippers.

1. Pull the ELK image

docker pull sebp/elk
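
To check that the image is now available locally:

docker images sebp/elk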

2. Run and exec into the container

#2.1 Run a container
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 -it --name elk sebp/elk

#2.2 Exec: enter the container
docker exec -it elk /bin/bash
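
Optionally, the container can run in the background with -d instead of the interactive -it; and on a Linux host, Elasticsearch refuses to start unless the kernel's mmap count is raised (a standard Elasticsearch bootstrap check, not specific to this image):

# (optional) run detached instead of interactive
docker run -d -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 5000:5000 --name elk sebp/elk

# (Linux hosts only) Elasticsearch requires vm.max_map_count >= 262144
sudo sysctl -w vm.max_map_count=262144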

3. Update the config inside the container

3.1 Input: /etc/logstash/conf.d/02-beats-input.conf

# before
input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
        ssl_key => "/etc/pki/tls/private/logstash-beats.key"
    }
}
# after
input {
    beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
        ssl_key => "/etc/pki/tls/private/logstash-beats.key"
    }
    tcp {
        port => 5000
    }
    file {
        path => ["/home/forLogstash/*.log"]
        type => "file"
    }
}
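
After editing, the pipeline can be syntax-checked before restarting Logstash; a minimal sketch, assuming the Logstash install path used by the sebp/elk image (/opt/logstash):

# validate every pipeline file in conf.d, then exit
/opt/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/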

3.2 Filter: /etc/logstash/conf.d/10-syslog.conf

# before
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
# after (or create a new config file in the folder)
# debug grok patterns online: https://grokdebug.herokuapp.com/
filter {
    grok {
        match => ["message", "^\[(?<Date>[^\]]+)\]\s\W\w+\W(?<Client_IP>(\d+.\d+.\d+.\d+))\s,\w+\W(?<Sever_IP>(\d+.\d+.\d+.\d+))\s\W(?<Class>[^\,]+),(?<Method>[^\s]+)\s\W\w+\W(?<Message>[^\O]+)\w+\W(?<Operator>[^\s]+)"]
    }
}
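
The grok filter above only captures strings; to make the captured Date drive the event's @timestamp, a date filter can be appended. A minimal sketch, assuming the bracketed timestamp format of the sample log in step 4 (yyyy-MM-dd HH:mm:ss.SSS):

filter {
    # hypothetical addition: promote the grok-captured Date field to @timestamp
    date {
        match => ["Date", "yyyy-MM-dd HH:mm:ss.SSS"]
    }
}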

3.3 Output: /etc/logstash/conf.d/output.conf

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    index => "logstash-%{+yyyy.MM.dd}"
  }
}
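
Once events start flowing, the daily index named by the pattern above should appear; it can be checked from the host through the published 9200 port:

curl 'http://127.0.0.1:9200/_cat/indices?v'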

4. Send a log file from the host to the Logstash process in the container

# Create a sample log file

[2017-05-10 19:52:42.863] ,ClientIP:123.456.789.11 ,SeverIP:124.362.251.52 ,ExampleApi ,getGame ,message:['message':'Hello World', 'data': [2, 5, 7]] , Operator:Neil

4.1 Set up Filebeat on Mac (official doc)

# Download and install (see official doc)

brew tap elastic/tap 
brew install elastic/tap/filebeat-full
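
To confirm the install (the formula puts a filebeat binary on the PATH):

filebeat version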

4.2 Update the config: /usr/local/etc/filebeat/filebeat.yml

# before
        .
        .
        .
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
        .
        .
        .
# after
        .
        .
        .
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /usr/local/etc/filebeat/logfile/*.log
    #- c:\programdata\elasticsearch\logs\*
        .
        .
        .
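
filebeat.yml also needs an output section pointing at the container's Beats port; a minimal sketch, assuming the container's logstash-beats.crt has been copied to the Mac (the certificate path below is an assumption):

#----------------------------- Logstash output --------------------------------
output.logstash:
  # the Beats input published in step 2
  hosts: ["127.0.0.1:5044"]
  # CA for the SSL-enabled beats input; adjust the path to wherever you copied the cert
  ssl.certificate_authorities: ["/usr/local/etc/filebeat/logstash-beats.crt"]

Then start Filebeat, e.g. via Homebrew services:

brew services start elastic/tap/filebeat-full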

4.3 Copy a file into the container

 docker cp ../<logfilename>.log <docker container ID>:/home/forLogstash/
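
The container ID can be found with docker ps; because the container was named elk in step 2, the name works in place of the ID:

docker ps
docker cp ../<logfilename>.log elk:/home/forLogstash/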

4.4 Send the log file by TCP

nc 127.0.0.1 5000 < <logfilename>.log 
#Example: nc 127.0.0.1 5000 < system.log 

PLUS: Automatically delete old log indices. [Curator]

1. Install Curator

pip3 install elasticsearch-curator

2. Edit the config

# curator.sh: points at the curator binary installed by pip
/<path>/<to>/.local/bin/curator
# e.g. /home/<username>/.local/bin/curator

# curator.yml: client settings
http_auth: "<user>:<password>"
logfile: <path>/<to>/log/curator.log
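
Curator is driven by an action file passed on the command line, which the snippet above leaves out; a minimal sketch, assuming daily logstash-* indices should be kept for 30 days (the file name action.yml is hypothetical):

# action.yml: delete logstash-* indices older than 30 days
actions:
  1:
    action: delete_indices
    description: "Clean up old logstash-* indices"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: logstash-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30

curator.sh would then call something like: curator --config curator.yml action.yml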

3. Set up crontab

#3.1 Edit

crontab -e

# run curator daily at 03:00
0 3 * * * /bin/bash /var/service-curator-elk/curator.sh

#3.2 Confirm

crontab -l
