Installing ELK with Docker (single node)
Create a docker network
docker network create -d bridge elastic
Pull elasticsearch version 8.4.3
docker pull docker.elastic.co/elasticsearch/elasticsearch:8.4.3
# It might also be this one:
docker pull elasticsearch:8.4.3
Run the container for the first time
docker run -it \
  -p 9200:9200 \
  -p 9300:9300 \
  --name elasticsearch \
  --net elastic \
  -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
  -e "discovery.type=single-node" \
  -e LANG=C.UTF-8 \
  -e LC_ALL=C.UTF-8 \
  elasticsearch:8.4.3
Note: do not add the -d flag the first time you run this command, otherwise you will not see the random password and random enrollment token that are generated when the service starts for the first time.
Copy the following content from the log and save it as a backup:
✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  =HjjCu=tj1orDTLJbWPv

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiOTIwNDg2N2U1OWEwMDRiMDRjNDRhOThkOTNjNDYwOTkzN2NlM2YxNDE3NWEzZWVkN2FmYTk4ZWUzMWJiZDRjMiIsImtleSI6Img0bGNvSkFCYkJnR1BQQXRtb3VZOnpCcjZQMUtZVFhHb1VDS2paazRHRHcifQ==

ℹ️  Configure other nodes to join this cluster:
• Copy the following enrollment token and start new Elasticsearch nodes with `bin/elasticsearch --enrollment-token <token>` (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjQuMyIsImFkciI6WyIxNzIuMjIuMC4yOjkyMDAiXSwiZmdyIjoiOTIwNDg2N2U1OWEwMDRiMDRjNDRhOThkOTNjNDYwOTkzN2NlM2YxNDE3NWEzZWVkN2FmYTk4ZWUzMWJiZDRjMiIsImtleSI6ImhZbGNvSkFCYkJnR1BQQXRtb3VLOjRZWlFkN1JIUk5PcVJqZTlsX2p6LXcifQ==

  If you're running in Docker, copy the enrollment token and run:
  `docker run -e "ENROLLMENT_TOKEN=<token>" docker.elastic.co/elasticsearch/elasticsearch:8.4.3`
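If you started the container with -d anyway and missed this output, you can re-read it from the container log, or reset the elastic password with the tool mentioned in the log above. A minimal sketch, assuming the container is named elasticsearch:
# re-display the startup log
docker logs elasticsearch
# generate a new random password for the elastic user
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-reset-password -u elastic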
Create the corresponding directories and copy the configuration files to the host
mkdir -p apps/elk8.4.3/elasticsearch
# The cp commands below are executed in the /home/ubuntu directory
docker cp elasticsearch:/usr/share/elasticsearch/config apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/data apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/plugins apps/elk8.4.3/elasticsearch/
docker cp elasticsearch:/usr/share/elasticsearch/logs apps/elk8.4.3/elasticsearch/
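The Kibana and Logstash steps later in this guide chown their copied directories to UID 1000. If Elasticsearch later refuses to start with permission errors on these bind-mounted directories, the same fix should work here; this is my addition, not part of the original procedure:
# assumes the elasticsearch process inside the container runs as UID/GID 1000
sudo chown -R 1000:1000 apps/elk8.4.3/elasticsearch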
Delete container
docker rm -f elasticsearch
Modify apps/elk8.4.3/elasticsearch/config/elasticsearch.yml
vim apps/elk8.4.3/elasticsearch/config/elasticsearch.yml
Add the following:
- Added: xpack.monitoring.collection.enabled: true
- Note: with this setting the cluster is shown as online in Kibana; without it, it is shown as offline.
Start Elasticsearch
docker run -it \
  -d \
  -p 9200:9200 \
  -p 9300:9300 \
  --name elasticsearch \
  --net elastic \
  -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
  -e "discovery.type=single-node" \
  -e LANG=C.UTF-8 \
  -e LC_ALL=C.UTF-8 \
  -v /home/ubuntu/apps/elk8.4.3/elasticsearch/config:/usr/share/elasticsearch/config \
  -v /home/ubuntu/apps/elk8.4.3/elasticsearch/data:/usr/share/elasticsearch/data \
  -v /home/ubuntu/apps/elk8.4.3/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
  -v /home/ubuntu/apps/elk8.4.3/elasticsearch/logs:/usr/share/elasticsearch/logs \
  elasticsearch:8.4.3
Verify the startup
Open https://xxxxx:9200/ in a browser. Username: elastic; the password is in the information saved at the first startup.
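The same check can be done from the command line. A minimal sketch run on the Docker host, reusing the password from the saved startup output (the -k flag skips verification of the self-signed certificate):
curl -k -u elastic:'=HjjCu=tj1orDTLJbWPv' https://localhost:9200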
Kibana
Install Kibana
docker pull kibana:8.4.3
Start Kibana
docker run -it \
  --restart=always \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=2 \
  --name kibana \
  -p 5601:5601 \
  --net elastic \
  kibana:8.4.3
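On first start Kibana prints a configuration link (ending in ?code=...) to its log; if it scrolls by, it can be re-read at any time. A minimal sketch, assuming the container is named kibana:
docker logs kibana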
Initialize Kibana authentication credentials
http://xxxx:5601/?code=878708
Notice:
Paste the enrollment token generated by Elasticsearch into the text area. Note that the token is only valid for 30 minutes; if it has expired, you have to enter the Elasticsearch container and generate a new one by running:
/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url "https://127.0.0.1:9200"
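For example, it can be run through docker exec without opening a shell; a sketch assuming the Elasticsearch container is named elasticsearch:
docker exec -it elasticsearch /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana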
Kibana Verification
Enter the verification code printed in the Kibana server log into the browser; here it is 628503.
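If the code has scrolled out of view, it can be re-read from the Kibana log or regenerated with the kibana-verification-code tool. A minimal sketch, assuming the container is named kibana:
docker logs kibana | grep -i code
docker exec -it kibana bin/kibana-verification-code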
Create a kibana directory and copy related configuration information
mkdir apps/elk8.4.3/kibana
# The cp commands below are executed in the /home/ubuntu directory
docker cp kibana:/usr/share/kibana/config apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/data apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/plugins apps/elk8.4.3/kibana/
docker cp kibana:/usr/share/kibana/logs apps/elk8.4.3/kibana/
sudo chown -R 1000:1000 apps/elk8.4.3/kibana
Modify apps/elk8.4.3/kibana/config/kibana.yml
### >>>>>>> BACKUP START: Kibana interactive setup (2024-03-25T07:30:11.689Z)

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-03-25T07:30:11.689Z)

# This section was automatically generated during setup.
i18n.locale: "zh-CN"
server.host: 0.0.0.0
server.shutdownTimeout: 5s
# This ip must be the elasticsearch container ip; you can find it with docker inspect elasticsearch | grep -i ipaddress
elasticsearch.hosts: ['https://your ip:9200']
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MTEzNTE4MTA5NDM6ZHZ1R3M5cV9RRlc2NmQ3dE9WaWM0QQ
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1711351811685.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://your ip:9200'], ca_trusted_fingerprint: 5e7d9fe48c485c2761f9e7a99b9d5737e4e34dc55b9bf6929d929fb34d61a11a}]
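To find the Elasticsearch container IP referenced in the comment above, docker inspect works. A minimal sketch, assuming the container is named elasticsearch:
docker inspect elasticsearch | grep -i ipaddress
# or print just the address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' elasticsearch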
Delete the container and restart
docker rm -f kibana

docker run -it \
  -d \
  --restart=always \
  --log-driver json-file \
  --log-opt max-size=100m \
  --log-opt max-file=2 \
  --name kibana \
  -p 5601:5601 \
  --net elastic \
  -v /home/ubuntu/apps/elk8.4.3/kibana/config:/usr/share/kibana/config \
  -v /home/ubuntu/apps/elk8.4.3/kibana/data:/usr/share/kibana/data \
  -v /home/ubuntu/apps/elk8.4.3/kibana/plugins:/usr/share/kibana/plugins \
  -v /home/ubuntu/apps/elk8.4.3/kibana/logs:/usr/share/kibana/logs \
  kibana:8.4.3
Logstash
Pull the Logstash image
docker pull logstash:8.4.3
Start Logstash
docker run -it \
  -d \
  --name logstash \
  -p 9600:9600 \
  -p 5044:5044 \
  --net elastic \
  logstash:8.4.3
Create a directory and synchronize configuration files
mkdir apps/elk8.4.3/logstash
# The cp commands below are executed in the /home/ubuntu directory
docker cp logstash:/usr/share/logstash/config apps/elk8.4.3/logstash/
docker cp logstash:/usr/share/logstash/pipeline apps/elk8.4.3/logstash/
sudo cp -rf apps/elk8.4.3/elasticsearch/config/certs apps/elk8.4.3/logstash/config/certs
sudo chown -R 1000:1000 apps/elk8.4.3/logstash
Modify the configuration apps/elk8.4.3/logstash/config/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "http://your ip:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
# The elastic password from the information saved at the first Elasticsearch startup, here L3WKr6ROTiK_DbqzBr8c
xpack.monitoring.elasticsearch.password: "L3WKr6ROTiK_DbqzBr8c"
xpack.monitoring.elasticsearch.ssl.certificate_authority: "/usr/share/logstash/config/certs/http_ca.crt"
# The HTTP CA fingerprint from the information saved at the first Elasticsearch startup, here 5e7d9fe48c485c2761f9e7a99b9d5737e4e34dc55b9bf6929d929fb34d61a11a
xpack.monitoring.elasticsearch.ssl.ca_trusted_fingerprint: "5e7d9fe48c485c2761f9e7a99b9d5737e4e34dc55b9bf6929d929fb34d61a11a"
Modify the configuration apps/elk8.4.3/logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
}
filter {
  date {
    # Because the time field in my log has the format 2024-03-14T15:34:03+08:00, the following two lines are needed
    match => [ "time", "ISO8601" ]
    target => "@timestamp"
  }
  json {
    source => "message"
  }
  mutate {
    remove_field => ["message", "path", "version", "@version", "agent", "cloud", "host", "input", "log", "tags", "_index", "_source", "ecs", "event"]
  }
}
output {
  elasticsearch {
    hosts => ["https://your ip:9200"]
    index => "douyin-%{+YYYY.MM.dd}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    # The CA fingerprint from the information saved at the first Elasticsearch startup
    ca_trusted_fingerprint => "your ca fingerprint"
    user => "elastic"
    # The elastic password from the information saved at the first Elasticsearch startup, here UkNx8px1yrMYIht30QUc
    password => "UkNx8px1yrMYIht30QUc"
  }
}
Delete the container and restart
docker rm -f logstash

docker run -it \
  -d \
  --name logstash \
  -p 9600:9600 \
  -p 5044:5044 \
  --net elastic \
  -v /home/ubuntu/apps/elk8.4.3/logstash/config:/usr/share/logstash/config \
  -v /home/ubuntu/apps/elk8.4.3/logstash/pipeline:/usr/share/logstash/pipeline \
  logstash:8.4.3
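After the restart, the pipeline syntax can be checked inside the container. A minimal sketch, assuming the pipeline file is mounted at /usr/share/logstash/pipeline/logstash.conf:
docker exec -it logstash bin/logstash --config.test_and_exit -f /usr/share/logstash/pipeline/logstash.conf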
Filebeat
Pull the Filebeat image
sudo docker pull elastic/filebeat:8.4.3
Start Filebeat
docker run -it \
  -d \
  --name filebeat \
  --network elastic \
  -e TZ=Asia/Shanghai \
  elastic/filebeat:8.4.3 \
  filebeat -e -c /usr/share/filebeat/filebeat.yml

Or, mounting the configuration, data, and log directories from the host:
docker run -d --name filebeat \
  -v /home/linyanbo/docker_data/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /home/linyanbo/docker_data/filebeat/data:/usr/share/filebeat/data \
  -v /var/logs/:/var/log \
  --link elasticsearch:elasticsearch \
  --network elastic \
  --user root \
  elastic/filebeat:8.4.3
Set the container to start automatically
docker update elasticsearch --restart=always
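The same restart policy can be applied to the other containers (Kibana already has --restart=always from its run command); a small sketch using the container names from this guide:
docker update logstash filebeat --restart=always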
Filebeat configuration file (filebeat.yml)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/logs/duty-admin//
  fields:
    log_source: oh-promotion
  fields_under_root: true
  multiline.pattern: ^\d{4}-\d{1,2}-\d{1,2}
  multiline.negate: true
  multiline.match: after
  scan_frequency: 5s
  close_inactive: 1h
  ignore_older: 24h

output.logstash:
  hosts: ["your ip:5044"]
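Filebeat can check its configuration and its connection to the Logstash output from inside the container. A minimal sketch, assuming the container is named filebeat and the config is at /usr/share/filebeat/filebeat.yml:
docker exec -it filebeat filebeat test config -c /usr/share/filebeat/filebeat.yml
docker exec -it filebeat filebeat test output -c /usr/share/filebeat/filebeat.yml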
The corresponding Logstash pipeline configuration:
input {
  beats {
    port => 5044
  }
}
filter {
  # mutate {
  #   split => {"message"=>" "}
  # }
  mutate {
    add_field => { "mm" => "%{message}" }
  }
}
output {
  elasticsearch {
    hosts => ["https://your ip:9200"]
    #index => "duty-admin%{+}"
    index => "duty-admin%{+YYYY}"
    ssl => true
    ssl_certificate_verification => false
    cacert => "/usr/share/logstash/config/certs/http_ca.crt"
    ca_trusted_fingerprint => "9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2"
    user => "elastic"
    password => "=HjjCu=tj1orDTLJbWPv"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
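Once Filebeat ships logs through Logstash, the new index should appear in Elasticsearch. A minimal check from the host, reusing the elastic password saved at first startup:
curl -k -u elastic:'=HjjCu=tj1orDTLJbWPv' 'https://localhost:9200/_cat/indices?v'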
: "docker-cluster" : 0.0.0.0 #----------------------- BEGIN SECURITY AUTO CONFIGURATION ----------------------- # # The following settings, TLS certificates, and keys have been automatically # generated to configure Elasticsearch security features on 11-07-2024 05:54:41 # # -------------------------------------------------------------------------------- # Enable security features : true # Description: After adding this configuration, the online status will be displayed in kibana, otherwise the offline status will be displayed.: true : true # Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents : enabled: true : certs/http.p12 # Enable encryption and mutual authentication between cluster nodes : enabled: true verification_mode: certificate : certs/transport.p12 : certs/transport.p12 #----------------------- END SECURITY AUTO CONFIGURATION -------------------------
And the complete kibana.yml:
### >>>>>>> BACKUP START: Kibana interactive setup (2024-07-11T06:09:35.897Z)

#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
#server.host: "0.0.0.0"
#server.shutdownTimeout: "5s"
#elasticsearch.hosts: [ "http://elasticsearch:9200" ]
#monitoring.ui.container.elasticsearch.enabled: true
### >>>>>>> BACKUP END: Kibana interactive setup (2024-07-11T06:09:35.897Z)

# This section was automatically generated during setup.
server.host: 0.0.0.0
server.shutdownTimeout: 5s
elasticsearch.hosts: ['https://your ip:9200']
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE3MjA2NzgxNzU2MzU6bU5RR25uQUVSaWExbUdHQ2tsODRmZw
elasticsearch.ssl.certificateAuthorities: [/usr/share/kibana/data/ca_1720678175894.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://your ip:9200'], ca_trusted_fingerprint: 9204867e59a004b04c44a98d93c4609937ce3f14175a3eed7afa98ee31bbd4c2}]
Summary
The above is based on my personal experience. I hope it serves as a useful reference, and thank you for your support.