Installing and Deploying an ELK Log Service System


We only need to switch to the root user and run the following:

[root@ip-10-0-2-153 elk]# vi /etc/sysctl.conf
[root@ip-10-0-2-153 elk]# tail -1 /etc/sysctl.conf
vm.max_map_count=655360
[root@ip-10-0-2-153 elk]# sysctl -p
vm.max_map_count = 655360

C. The error **[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]** appears because the per-process limit on open file descriptors is too small; the current values can be inspected with the two commands sketched after the fix below. The fix is to append the following two lines to /etc/security/limits.conf, then log out and back in for them to take effect:

*               soft    nofile          65536
*               hard    nofile          65536

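The two check commands referred to above are presumably the soft and hard open-file limit queries (an assumption; any equivalent check works):

ulimit -Sn    # current soft limit on open file descriptors
ulimit -Hn    # current hard limit on open file descriptors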

D. The error Exception in thread "main" java.nio.file.AccessDeniedException: /usr/local/elk/elasticsearch/config/jvm.options appears because the non-root user running Elasticsearch has no permission on these files. The fix is to hand the installation directory to that user:

chown -R elk:elk elasticsearch/

After that, restart Elasticsearch:

[centos@ip-10-0-2-153 ~]$ cd /usr/local/elk/elasticsearch/bin/
[centos@ip-10-0-2-153 bin]$ ./elasticsearch -d
future versions of Elasticsearch will require Java 11; your Java version from [/home/deploy/java8/jre] does not meet this requirement

Check whether the process has started.
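A minimal sketch of such a check, assuming the default 9200/9300 ports used in this deployment:

[centos@ip-10-0-2-153 ~]$ ps -ef | grep elasticsearch | grep -v grep
[centos@ip-10-0-2-153 ~]$ sudo netstat -antp | grep -E '9200|9300'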

Second, deploy on the secondary host. The procedure is identical to the primary's.

Its configuration file differs in the following settings; everything else is the same:

# Cluster name
cluster.name: cpct

# Node name
node.name: ip-10-0-2-111

# Data directory (create it beforehand)
path.data: /usr/local/elk/elasticsearch/data

# Log directory (create it beforehand)
path.logs: /usr/local/elk/elasticsearch/logs

# Node IP
network.host: 10.0.2.111

# TCP transport port
transport.tcp.port: 9300

# HTTP port
http.port: 9200

# Seed host list; the master node's IP must appear in seed_hosts
discovery.seed_hosts: ["10.0.2.153:9300","10.0.2.111:9300"]

# Master-eligible node list; if there are several master nodes, list them all here
cluster.initial_master_nodes: ["10.0.2.153:9300"]

# Master-related settings
# Whether this node may be elected master
node.master: false
# Whether this node stores data
node.data: true
node.ingest: false
node.ml: false
cluster.remote.connect: false

# Cross-origin (CORS) settings, needed later for elasticsearch-head
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
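Once the secondary node is started, one quick way to confirm it joined the cluster (assuming the primary at 10.0.2.153 is reachable) is the _cat/nodes API:

curl http://10.0.2.153:9200/_cat/nodes?v

Both nodes should be listed, with the elected master marked by an asterisk.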

Third, set up elasticsearch-head on the primary host

1) Install the Node.js runtime

[root@ip-10-0-2-153 updates]# tar -xf node-v10.15.3-linux-x64.tar.gz
[root@ip-10-0-2-153 updates]# mv node-v10.15.3-linux-x64 ../
[root@ip-10-0-2-153 updates]# cd ../
[root@ip-10-0-2-153 elk]# mv node-v10.15.3-linux-x64/ node
[root@ip-10-0-2-153 elk]# tail -2 /etc/profile
export NODE_HOME=/usr/local/elk/node
export PATH=$NODE_HOME/bin:$PATH
[root@ip-10-0-2-153 elk]# source /etc/profile
[root@ip-10-0-2-153 elk]# node -v
v10.15.3

2) Download elasticsearch-head from GitHub

[root@ip-10-0-2-153 elasticsearch-head]# yum -y install bzip2
[root@ip-10-0-2-153 elk]# git clone git://github.com/mobz/elasticsearch-head.git
Cloning into 'elasticsearch-head'...
remote: Enumerating objects: 10, done.
remote: Counting objects: 100% (10/10), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 4347 (delta 0), reused 3 (delta 0), pack-reused 4337
Receiving objects: 100% (4347/4347), 2.49 MiB | 70.00 KiB/s, done.
Resolving deltas: 100% (2417/2417), done.
[root@ip-10-0-2-153 elk]# cd elasticsearch-head
[root@ip-10-0-2-153 elasticsearch-head]# wget https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@ip-10-0-2-153 elasticsearch-head]# mkdir -p /tmp/phantomjs
[root@ip-10-0-2-153 elasticsearch-head]# cp phantomjs-2.1.1-linux-x86_64.tar.bz2 /tmp/phantomjs/
[root@ip-10-0-2-153 elasticsearch-head]# npm install
npm WARN deprecated phantomjs-prebuilt@2.1.16: this package is now deprecated

phantomjs-prebuilt@2.1.16 install /usr/local/elk/elasticsearch-head/node_modules/phantomjs-prebuilt
node install.js

PhantomJS not found on PATH
Download already available at /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Verified checksum of previously downloaded file
Extracting tar contents (via spawned process)
Removing /usr/local/elk/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom
Copying extracted folder /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2-extract-1578400904569/phantomjs-2.1.1-linux-x86_64 -> /usr/local/elk/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom
Writing location.js file
Done. Phantomjs binary available at /usr/local/elk/elasticsearch-head/node_modules/phantomjs-prebuilt/lib/phantom/bin/phantomjs
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN elasticsearch-head@0.0.0 license should be a valid SPDX license expression
npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.11 (node_modules/fsevents):
npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.11: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

added 67 packages from 69 contributors and audited 1771 packages in 7.116s
found 40 vulnerabilities (19 low, 2 moderate, 19 high)
run npm audit fix to fix them, or npm audit for details

[root@ip-10-0-2-153 elasticsearch-head]# npm run start &
[1] 5731
[root@ip-10-0-2-153 elasticsearch-head]#

elasticsearch-head@0.0.0 start /usr/local/elk/elasticsearch-head
grunt server

Running "connect:server" (connect) task
Waiting forever…
Started connect web server on http://localhost:9100

Fourth, testing

1) Visit http://10.0.2.153:9200/ to verify that Elasticsearch itself is up.

2) Visit http://10.0.2.153:9200/_cluster/health?pretty=true to verify that the whole cluster is healthy.

3) Finally, visit http://10.0.2.153:9100 and connect it to http://10.0.2.153:9200 to inspect the cluster in elasticsearch-head (this is what the CORS settings above enable).
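The first two checks can also be run from the shell; a sketch with curl:

curl http://10.0.2.153:9200/
curl http://10.0.2.153:9200/_cluster/health?pretty=true

A healthy two-node cluster should report "number_of_nodes" : 2 and a "status" of green (or yellow if some replicas are unassigned).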

2. Install the Kibana service

1) Extract the package and edit the configuration file

[root@ip-10-0-2-211 elk]# tar -xf updates/kibana-7.5.1-linux-x86_64.tar.gz
[root@ip-10-0-2-211 elk]# mv kibana-7.5.1-linux-x86_64/ kibana
[root@ip-10-0-2-211 elk]# cd kibana/
[root@ip-10-0-2-211 kibana]# vim config/kibana.yml

Only kibana.yml needs changes: set server.host to this machine's address and elasticsearch.hosts to our Elasticsearch address.
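A minimal sketch of the two changed lines, assuming the host addresses used throughout this walkthrough (10.0.2.211 for Kibana, 10.0.2.153 for the Elasticsearch master):

server.host: "10.0.2.211"
elasticsearch.hosts: ["http://10.0.2.153:9200"]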

2) Start the Kibana service

[root@ip-10-0-2-211 kibana]# cd bin/
[root@ip-10-0-2-211 bin]# ./kibana
Kibana should not be run as root. Use --allow-root to continue.

[root@ip-10-0-2-211 bin]# ./kibana --allow-root &
[1] 30371

3) Check that the process is running

[root@ip-10-0-2-211 bin]# ps -ef |grep node
root 30371 30179 32 10:01 pts/0 00:00:38 ./../node/bin/node ./../src/cli --allow-root
[root@ip-10-0-2-211 bin]# netstat -antpu |grep 5601
tcp 0 0 10.0.2.211:5601 0.0.0.0:* LISTEN 30371/./../node/bin

4) Create a systemd service and enable it at boot

[root@data-node1 system]# vim /lib/systemd/system/kibana.service

[Unit]
Description=Kibana
After=network.target

[Service]
Type=simple
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=kibana
User=root
WorkingDirectory=/usr/local/elk/kibana
ExecStart=/usr/local/elk/kibana/bin/kibana "-c /usr/local/elk/kibana/config/kibana.yml" --allow-root
KillMode=process
TimeoutStopSec=60
Restart=on-failure
RestartSec=5
RemainAfterExit=no

[Install]
WantedBy=multi-user.target
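Because this unit file is newly created, systemd has to re-read its unit definitions before the service can be enabled:

[root@data-node1 system]# systemctl daemon-reload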

Enable the service at boot and check its status:

[root@data-node1 system]# systemctl enable kibana
[root@data-node1 system]# systemctl restart kibana
[root@data-node1 system]# systemctl status kibana
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-04-09 18:19:23 CST; 1h 54min ago
Main PID: 20338 (node)
CGroup: /system.slice/kibana.service
└─20338 /usr/local/elk/kibana/bin/../node/bin/node /usr/local/elk/kibana/bin/../src/cli --allow-root

Apr 09 19:44:24 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:24Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:44:26 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:26Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:44:44 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:44Z","tags":[],"pid":20338,"method":"get","statusCode":200,"req":{"url":"/built_a… (Windows NT
Apr 09 19:44:44 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:44Z","tags":[],"pid":20338,"method":"get","statusCode":200,"req":{"url":"/built_a…ows NT 10.0;
Apr 09 19:44:47 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:46Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:44:47 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:44:46Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:48:21 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:48:21Z","tags":[],"pid":20338,"method":"get","statusCode":200,"req":{"url":"/built_a…(Windows NT
Apr 09 19:48:25 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:48:23Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:48:38 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:48:36Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Apr 09 19:59:20 data-node1 kibana[20338]: {"type":"response","@timestamp":"2020-04-09T11:59:18Z","tags":[],"pid":20338,"method":"post","statusCode":200,"req":{"url":"/elasti…d":"post","h
Hint: Some lines were ellipsized, use -l to show in full.

5) Visit http://10.0.2.211:5601

6) Configure login authentication

First add the following server block to the nginx configuration file:

server {
    listen 80;
    server_name your-domain;    # replace with your actual domain name

    location / {
        proxy_pass http://127.0.0.1:5601;

        # add these two lines
        auth_basic "Login Authentication";
        auth_basic_user_file /etc/nginx/htpasswd;
    }
}
Then run:

htpasswd -cm /etc/nginx/htpasswd royzelas     # /etc/nginx/htpasswd is the password file named in the config above; royzelas is the username
New password:     # enter the password
Re-type new password:     # enter it again and press Enter
Adding password for user royzelas

Finally, visit http://10.0.2.211 in a browser; you should now be prompted for the username and password before reaching Kibana.
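Two common stumbling blocks here, in case they apply to your machine: htpasswd is not installed by default (on CentOS it ships with the httpd-tools package), and nginx must reload its configuration before the authentication takes effect:

yum -y install httpd-tools
nginx -s reload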

3. Deploy the log-collection service on the clients

1) Deploy the Logstash service

Extract the package and edit the configuration file:

[root@ip-10-0-2-95 elk]# tar -xf updates/logstash-7.5.1.tar.gz
[root@ip-10-0-2-95 elk]# mv logstash-7.5.1/ logstash
[root@ip-10-0-2-95 elk]# cd logstash/
[root@ip-10-0-2-95 logstash]# cd bin/
[root@ip-10-0-2-95 bin]# vim log_manage.conf

Start the Logstash service:

[root@ip-10-0-2-95 bin]# ./logstash -f log_manage.conf &
[1] 1920

The log_manage.conf configuration file looks like this:

input {
  file {
    path => "/var/log/messages"      # log file to collect
    type => "system"                 # custom type label; must match the type checked in output { }
    start_position => "beginning"    # collect from the start of the file
  }
  file {
    path => "/home/deploy/activity_service/logs/gxzx-act-web.log"
    type => "activity"
    start_position => "beginning"
  }
  file {
    path => "/home/deploy/tomcat8_manage/logs/catalina-*out"
    type => "manage"
    start_position => "beginning"
  }
  file {
    path => "/home/deploy/tomcat8_coin/logs/catalina-*out"
    type => "coin"
    start_position => "beginning"
  }
}
output {
  if [type] == "system" {                # when type is system,
    elasticsearch {                      # send to the Elasticsearch server
      hosts => ["10.0.2.153:9200"]       # Elasticsearch address and port
      index => "system-%{+YYYY.MM.dd}"   # index name pattern
    }
  }
  if [type] == "activity" {
    elasticsearch {
      hosts => ["10.0.2.153:9200"]
      index => "activity-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "coin" {
    elasticsearch {
      hosts => ["10.0.2.153:9200"]
      index => "coin-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "manage" {
    elasticsearch {
      hosts => ["10.0.2.153:9200"]
      index => "manage-%{+YYYY.MM.dd}"
    }
  }
}
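Before leaving Logstash running in the background, the pipeline file can be validated first; Logstash 7.x supports a test-and-exit flag:

[root@ip-10-0-2-95 bin]# ./logstash -f log_manage.conf --config.test_and_exit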

Check that the service has started (the long Java classpath is elided here for readability):

[root@ip-10-0-2-95 bin]# ps -ef |grep logstash
root 1920 1848 99 10:34 pts/0 00:00:33 /home/deploy/java8/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly … -cp /usr/local/elk/logstash/logstash-core/lib/jars/… org.logstash.Logstash -f log_manage.conf

Set up a systemd service and enable it at boot, as with Kibana:

[root@manage-host system]# vim /lib/systemd/system/logstash.service
[Unit]
Description=Logstash
After=network.target

[Service]
