Table of Contents
- Preparation
- Installation
- Install Elasticsearch
- Install Logstash
- Install Kibana
- Configure ELK
- Configure Elasticsearch
- Configure Logstash
- Configure Kibana
- Start ELK
- Startup commands
- Startup test
- Set up ELK policies
- Create an ILM policy
- Associate the ILM policy with log indices
- Check whether indices are managed by the ILM policy
Preparation
ELK consists of three pieces of software:
Elasticsearch serves as the search engine
Logstash handles log collection; another log shipper such as Filebeat can be used instead
Kibana provides the log management UI
Package preparation:
You can download the official ELK RPM packages from
https://www.elastic.co/guide/en/elasticsearch/reference/current/rpm.html
or search Alibaba's RPM repository:
https://developer.aliyun.com/packageSearch?word=haproxy
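For example, the 8.15.2 RPMs used below can be pulled straight from the Elastic artifacts server (the URLs follow Elastic's standard download pattern; swap in the version you need):

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.15.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-8.15.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.15.2-x86_64.rpm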
Installation
Install Elasticsearch
sudo rpm -ivh elasticsearch-8.15.2-x86_64.rpm
Install Logstash
sudo rpm -ivh logstash-8.15.2-x86_64.rpm
Install Kibana
sudo rpm -ivh kibana-8.15.2-x86_64.rpm
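To confirm all three packages installed cleanly, you can query the RPM database:

# List the installed ELK packages and their versions
rpm -qa | grep -E 'elasticsearch|logstash|kibana'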
Configure ELK
Configure Elasticsearch
Pay attention to the port and host settings, and to whether you are deploying a cluster.
Open the Elasticsearch configuration file in an editor; it is usually at:
vi /etc/elasticsearch/elasticsearch.yml
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
node.roles: ["master", "data"]
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.system_call_filter: true
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 11-06-2024 09:36:34
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["node-1"]
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
Configure Logstash
Create a directory for Logstash to store its own log output
If you do not want Logstash to keep separate log files of its own, skip the steps below; the output.file block in the config can also be removed, to stop the logs from growing too large.
sudo mkdir -p /elk/logs/logstash/
sudo chown -R logstash:logstash /elk/logs/logstash/
sudo chmod -R 755 /elk/logs/logstash/
Open the Logstash pipeline configuration file:
vi /etc/logstash/conf.d/logstash.conf
input {
  http {
    host => "0.0.0.0"   # listen on all available interfaces
    port => 5441        # listening port
    additional_codecs => { "application/json" => "json" }
    type => "http_json"
  }
  http {
    host => "0.0.0.0"   # listen on all available interfaces
    port => 5442        # listening port
    additional_codecs => { "application/json" => "json" }
    type => "http5442_json"
  }
  udp {
    port => 5440
    codec => plain
    type => "udp_source"
  }
}

filter {
  date {
    match => ["event.original", "YYYY-MM-dd HH:mm:ss.SSSS"]
    timezone => "Asia/Shanghai"
    target => "@timestamp"
  }
}

output {
  if [type] == "http_json" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "http-log-%{+YYYY.MM.dd}"   # Elasticsearch index name
    }
  } else if [type] == "udp_source" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "udp-log-%{+YYYY.MM.dd}"
    }
  } else if [type] == "http5442_json" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "http5442-log-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }   # print events to the console for debugging
}
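With this pipeline in place, you can send test events to the configured ports to verify each input (a quick smoke test, assuming Logstash is already running and the ports are reachable):

# HTTP JSON input on 5441
curl -X POST "http://localhost:5441" -H 'Content-Type: application/json' -d '{"message": "hello from http"}'
# Second HTTP JSON input on 5442
curl -X POST "http://localhost:5442" -H 'Content-Type: application/json' -d '{"message": "hello from 5442"}'
# UDP plain-text input on 5440 (requires netcat)
echo "2024-06-11 09:36:34.0000 test udp message" | nc -u -w1 localhost 5440

Each event should show up on the console via the stdout rubydebug output and land in the corresponding http-log-*/udp-log-*/http5442-log-* index.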
Configure Kibana
vi /etc/kibana/kibana.yml
# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
#      policy:
#        type: size-limit
#        size: 256mb
#      strategy:
#        type: numeric
#        max: 10
#      layout:
#        type: json
  root:
    appenders:
      - default
      - file

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
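One thing to watch: the file appender above writes to /var/log/kibana/kibana.log. The RPM package normally creates /var/log/kibana, but if it is missing on your system, create it and hand it to the kibana user (same idea as the Logstash directory earlier):

sudo mkdir -p /var/log/kibana
sudo chown kibana:kibana /var/log/kibana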
Start ELK
Startup commands
sudo systemctl start elasticsearch.service
# Enable start on boot:
sudo systemctl enable elasticsearch.service

sudo systemctl start logstash
sudo systemctl enable logstash

sudo systemctl start kibana
sudo systemctl enable kibana
Startup test
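With security disabled in this setup, a few plain-HTTP checks are enough to confirm everything came up:

# Elasticsearch should answer with cluster info as JSON
curl http://localhost:9200
# Kibana's status API reports the server state once the UI is ready
curl -s http://localhost:5601/api/status
# If anything fails, inspect the services and recent logs
systemctl status elasticsearch logstash kibana
journalctl -u elasticsearch -n 50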
Set up ELK policies
If log data grows too large, you can use Elasticsearch's ILM (Index Lifecycle Management) policies, for example to automatically delete indices older than 3 days.
Create an ILM policy
curl -X PUT "http://localhost:9200/_ilm/policy/delete_after_3_days" -H 'Content-Type: application/json' -d '
{"policy": {"phases": {"delete": {"min_age": "3d","actions": {"delete": {}}}}}
}'
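To verify the policy was stored, read it back:

curl -X GET "http://localhost:9200/_ilm/policy/delete_after_3_days"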
Associate the ILM policy with log indices
curl -X PUT "http://localhost:9200/_template/my_template" -H 'Content-Type: application/json' -d '
{"index_patterns": ["ohtc-log*", "oht-log*", "gso-log*"],"settings": {"index.lifecycle.name": "delete_after_3_days"},"mappings": {"properties": {"@timestamp": {"type": "date"}}}
}'
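Note that _template is the legacy template API, which Elasticsearch 8.x still accepts but marks as deprecated. The composable-template equivalent (a sketch with the same patterns and policy, via the _index_template endpoint) would look like:

curl -X PUT "http://localhost:9200/_index_template/my_template" -H 'Content-Type: application/json' -d '
{
  "index_patterns": ["ohtc-log*", "oht-log*", "gso-log*"],
  "template": {
    "settings": { "index.lifecycle.name": "delete_after_3_days" },
    "mappings": { "properties": { "@timestamp": { "type": "date" } } }
  }
}'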
Check whether indices are managed by the ILM policy
curl -X GET "http://localhost:9200/oht-log*/_ilm/explain"
curl -X GET "http://localhost:9200/ohtc-log*/_ilm/explain"