【ELK】Installing and Deploying ELK on Windows


Installing and Deploying ELK

  • 1. Download
  • 2. Configure & Start
    • 2.1 elasticsearch
      • 2.1.1 Generate the CA
      • 2.1.2 Generate the certificate
      • 2.1.3 Move the credentials to a dedicated directory
      • 2.1.4 Edit the configuration
      • 2.1.5 Start
      • 2.1.6 Access test
      • 2.1.7 Create a Kibana account
    • 2.2 kibana
      • 2.2.1 Edit the configuration
      • 2.2.2 Start
      • 2.2.3 Access test
    • 2.3 logstash
      • 2.3.1 Edit the configuration
      • 2.3.2 Start
    • 2.4 filebeat
      • 2.4.1 Edit the configuration
      • 2.4.2 Start

1. Download

Official site: https://www.elastic.co/

Download page: Download Elasticsearch | Elastic

Other versions and products: Past Releases of Elastic Stack Software | Elastic

IK analyzer: https://github.com/infinilabs/analysis-ik/releases

You need to download Elasticsearch, Kibana, Logstash, and Filebeat (the four components configured below).

All products must be on the same version.

2. Configure & Start

2.1 elasticsearch

2.1.1 Generate the CA

Switch to the bin directory and run:

elasticsearch-certutil.bat ca

At the first prompt (the output filename), just press Enter; the file is generated in the current directory.
At the second prompt, enter a password, e.g. 123456.
When it finishes, a file named elastic-stack-ca.p12 has been created.
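If you prefer to skip the interactive prompts, the same CA can be generated in one line (a sketch using the example password above; --out and --pass are standard elasticsearch-certutil options):

elasticsearch-certutil.bat ca --out elastic-stack-ca.p12 --pass 123456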

2.1.2 Generate the certificate

elasticsearch-certutil.bat cert --ca ./elastic-stack-ca.p12

At the first prompt, enter the CA password and press Enter.
At the second prompt (the output filename), press Enter without typing anything.
At the third prompt, set and confirm a password; a file named elastic-certificates.p12 is then generated.
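This step, too, can be run non-interactively (a sketch reusing the example passwords; --ca-pass, --out and --pass are standard options of the cert subcommand):

elasticsearch-certutil.bat cert --ca ./elastic-stack-ca.p12 --ca-pass 123456 --out elastic-certificates.p12 --pass 123456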

2.1.3 Move the credentials to a dedicated directory

Place the generated elastic-stack-ca.p12 and elastic-certificates.p12 files in the E:\software\elasticsearch-8.9.2\config\certificates directory.
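From a cmd window in the bin directory (where the files were generated), this can be done with (a sketch; adjust the paths to your install location):

mkdir E:\software\elasticsearch-8.9.2\config\certificates
move elastic-stack-ca.p12 E:\software\elasticsearch-8.9.2\config\certificates\
move elastic-certificates.p12 E:\software\elasticsearch-8.9.2\config\certificates\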

2.1.4 Edit the configuration

Edit config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-elatics
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: E:\ELK\elasticsearch-8.9.2\data
#
# Path to log files:
#
path.logs: E:\ELK\elasticsearch-8.9.2\logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 0.0.0.0
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 22-04-2024 01:19:26
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true
xpack.security.enrollment.enabled: true

#xpack.security.http.ssl:
#  enabled: true
#  keystore.path: certs/http.p12

xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  keystore.password: 123456
  truststore.path: E:\\software\\elasticsearch-8.9.2\\config\\certificates\\elastic-certificates.p12
  truststore.password: 123456
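Note: some 8.x releases insist that the keystore/truststore passwords be stored as secure settings rather than in elasticsearch.yml. If startup fails with an error to that effect, add them to the Elasticsearch keystore from the bin directory instead (the setting names below are the standard secure variants):

elasticsearch-keystore.bat add xpack.security.transport.ssl.keystore.secure_password
elasticsearch-keystore.bat add xpack.security.transport.ssl.truststore.secure_password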

2.1.5 Start

Double-click elasticsearch.bat to start. On first startup the console prints the password for the elastic user, an enrollment token, and related information; be sure to record them.
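If you miss or lose that output, the elastic password can be regenerated with the bundled reset tool (run from the bin directory):

elasticsearch-reset-password.bat -u elastic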

2.1.6 Access test

Visit http://localhost:9200/. You are prompted for a username and password: the username is elastic and the password is the one printed in the startup output. After confirming, the node information is displayed.

Success!
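The same check can be run from the command line (curl ships with recent versions of Windows; <password> is a placeholder for the password recorded at startup):

curl -u elastic:<password> http://localhost:9200/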

2.1.7 Create a Kibana account

The elastic account cannot be used to log in to Kibana, so you need to create a separate account and grant it roles. Open cmd in the Elasticsearch bin directory and run:

elasticsearch-users useradd <username>

You will then be prompted for a password; type it in and the user is created.

Grant roles:

elasticsearch-users roles -a superuser <username>
elasticsearch-users roles -a kibana_system <username>

superuser lets the account access Elasticsearch on port 9200; kibana_system is what allows Kibana to connect to Elasticsearch. The two steps can also be combined, as shown below.
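As a one-step alternative, the user can be created with a password and roles in a single command (a sketch; user-kibana/123456 are the example credentials that appear in kibana.yml below):

elasticsearch-users useradd user-kibana -p 123456 -r superuser,kibana_system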

Check the granted roles:

elasticsearch-users roles -v <username>

Roles granted successfully.

2.2 kibana

2.2.1 Edit the configuration

Edit config/kibana.yml:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "localhost"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: true
#server.ssl.certificate: E:\ELK\kibana-8.9.2\config\certs\http_ca.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
#enterpriseSearch.host: 'http://localhost:3002'

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.username: "user-kibana"
elasticsearch.password: "123456"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjkuMiIsImFkciI6WyIxOTIuMTY4LjMuNzQ6OTIwMCJdLCJmZ3IiOiI4MjBlYjQwZWJlZDZkMjczZjU0NzM5ZWY2Y2EyNDFkOTY3YmYzMDFlYWYyMGU5M2E4YWRkNGExMjlhNTBlMzAwIiwia2V5IjoiRzA1akE0OEI4R3Rabk1QRmNXVUU6Um45QmsxRWtTWU9pcFlxdW1haVlQQSJ9"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
logging.root.level: info

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
i18n.locale: "zh-CN"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

2.2.2 Start

Double-click kibana.bat to start. The console log shows when startup has succeeded.

2.2.3 Access test

Visit http://localhost:5601 and log in with the account and password created in section 2.1.7.

2.3 logstash

2.3.1 Edit the configuration

Edit the config/logstash-sample.conf file (or make a copy and edit that):

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.input {beats {port => 5044}
}output {file {path => "E:\ELK\logstash-8.9.2\logstash-test.log"                        #在web1节点本地生成一份日志文件}elasticsearch {hosts => ["http://localhost:9200"]index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"user => "elastic"password => "QFpuWk-MOuwIwXsE=TM6"}
}
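Before starting, the pipeline syntax can be validated without actually running it (--config.test_and_exit is a standard Logstash flag):

logstash.bat -f ./config/logstash-sample.conf --config.test_and_exit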

2.3.2 Start

Run the following from the bin directory:

logstash.bat -f ./config/logstash-sample.conf

Success.

2.4 filebeat

2.4.1 Edit the configuration

Edit filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - E:\logs\*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  enabled: true

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
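Filebeat can verify the configuration file and the connection to the Logstash output before running (test config and test output are built-in Filebeat subcommands):

filebeat.exe test config -c filebeat.yml
filebeat.exe test output -c filebeat.yml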

2.4.2 Start

Run the following from the directory containing filebeat.exe:

filebeat.exe -e -c filebeat.yml

Filebeat starts and begins shipping the logs under E:\logs to Logstash.
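To confirm the whole pipeline works end to end, check that a Filebeat-derived index has appeared in Elasticsearch (a sketch; replace <password> with your elastic password):

curl -u elastic:<password> "http://localhost:9200/_cat/indices?v"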

