ELK 6 (Part 1)



Installing Elasticsearch

If the downloads fail, download the packages yourself and upload them to /root/product (I downloaded them manually and placed them in that directory).
elasticsearch download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
kibana download: https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
logstash download: https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm

[root@master-node product]# mkdir -p /root/product
[root@master-node product]# cd /root/product
[root@master-node product]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
[root@master-node product]# rpm -ivh elasticsearch-6.4.0.rpm
warning: elasticsearch-6.4.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:6.4.0-1 ################################# [100%]

NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service

You can start elasticsearch service by executing

sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
[root@master-node product]#
[root@master-node product]# ll /etc/elasticsearch
total 28
-rw-rw----. 1 root elasticsearch 207 Sep 12 19:43 elasticsearch.keystore
-rw-rw----. 1 root elasticsearch 2869 Aug 18 07:23 elasticsearch.yml
-rw-rw----. 1 root elasticsearch 3009 Aug 18 07:23 jvm.options
-rw-rw----. 1 root elasticsearch 6380 Aug 18 07:23 log4j2.properties
-rw-rw----. 1 root elasticsearch 473 Aug 18 07:23 role_mapping.yml
-rw-rw----. 1 root elasticsearch 197 Aug 18 07:23 roles.yml
-rw-rw----. 1 root elasticsearch 0 Aug 18 07:23 users
-rw-rw----. 1 root elasticsearch 0 Aug 18 07:23 users_roles
[root@master-node product]#

jvm.options sets the Java-related parameters. The two lines
-Xms1g
-Xmx1g
configure how much memory ES runs with. For installation and configuration details you can also refer to the official docs:
https://www.elastic.co/guide/en/elasticsearch/reference/6.0/rpm.html
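As a small sketch of how that heap edit can be scripted, the snippet below works on a throwaway copy of the file, so nothing real is touched; on the server you would edit /etc/elasticsearch/jvm.options itself:

```shell
# Sketch: raise the ES heap from 1g to 2g in a copy of jvm.options.
# -Xms and -Xmx should be kept equal to avoid heap resizing at runtime.
cat > /tmp/jvm.options <<'EOF'
-Xms1g
-Xmx1g
EOF
sed -i 's/^-Xms1g/-Xms2g/; s/^-Xmx1g/-Xmx2g/' /tmp/jvm.options
grep -E '^-Xm[sx]' /tmp/jvm.options
```

A common rule of thumb is to give ES no more than half of the machine's RAM for heap.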
Configuring Elasticsearch

[root@master-node elasticsearch]# more /etc/elasticsearch/elasticsearch.yml |grep -v "^#"
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
[root@master-node elasticsearch]#vim /etc/elasticsearch/elasticsearch.yml

vim tip: press G (capital g) to jump to the last line

On the master node, add:

cluster.name: master-node # cluster name; must be identical on every node
node.name: master # this node's name
node.master: true # this node is master-eligible
node.data: true # this node is also a data node
network.host: 0.0.0.0 # listen on all IPs; in production, bind to a specific, safe IP
http.port: 9200 # HTTP port of the ES service
discovery.zen.ping.unicast.hosts: ["192.168.220.71", "192.168.220.72"] # hosts for node discovery

On the data (slave) node, add:

cluster.name: master-node # cluster name; must match the master node's setting
node.name: data-node1 # this node's name
node.master: false # this node is not master-eligible
node.data: true # this node is a data node
network.host: 0.0.0.0 # listen on all IPs; in production, bind to a specific, safe IP
http.port: 9200 # HTTP port of the ES service
discovery.zen.ping.unicast.hosts: ["192.168.220.71", "192.168.220.72"] # hosts for node discovery

Start Elasticsearch: start the master node first, then the data node.

systemctl start elasticsearch.service

Check the logs:

[root@master-node ~]# ls /var/log/elasticsearch/
[root@master-node ~]# tail -50f /var/log/messages

Check that the service started:

[root@master-node elasticsearch]# curl '192.168.220.71:9200/_cluster/health?pretty'
{
  "cluster_name" : "master-node",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@master-node elasticsearch]#
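For use in monitoring scripts, the "status" field can be extracted from that response. A minimal sketch, assuming python3 is available on the host; the inlined sample JSON stands in for the actual curl output:

```shell
# Sketch: pull "status" out of a _cluster/health response.
# In practice: curl -s '192.168.220.71:9200/_cluster/health' | python3 -c ...
health='{"cluster_name":"master-node","status":"green","timed_out":false,"number_of_nodes":1}'
status=$(printf '%s' "$health" | python3 -c 'import sys, json; print(json.load(sys.stdin)["status"])')
echo "cluster status: $status"
```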

Check the cluster state:

[root@master-node elasticsearch]# curl '192.168.220.71:9200/_cluster/state?pretty'
{
  "cluster_name" : "master-node",
  "compressed_size_in_bytes" : 9574,
  "cluster_uuid" : "OYnLCw6DSdeWet020B-zzA",
  "version" : 16,
  "state_uuid" : "1GtRg_ZhT2qOPPJeyzPY_w",
  "master_node" : "45ktex-MTPKmE9Jpcd2HBQ",
  "blocks" : { },
  "nodes" : {
    "45ktex-MTPKmE9Jpcd2HBQ" : {
      "name" : "master",
      "ephemeral_id" : "bHU_jIfUQ1KQvomp2Pyx_g",
      "transport_address" : "192.168.220.71:9300",
      "attributes" : {
        "ml.machine_memory" : "1888342016",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20",
        "ml.enabled" : "true"
      }
    },
    "624Y_ao2Svq0wfbdmaqHUg" : {
      "name" : "data-node1",
      "ephemeral_id" : "Do0nAllcSQmmtpNeocV3wA",
      "transport_address" : "192.168.220.72:9300",
      "attributes" : {
        "ml.machine_memory" : "1913507840",
        "ml.max_open_jobs" : "20",
        "xpack.installed" : "true",
        "ml.enabled" : "true"
      }
    }
  },

  "snapshot_deletions" : {
    "snapshot_deletions" : [ ]
  }
}

Seeing both nodes in this output means the ES cluster has been set up successfully.

Installing Kibana

Install Kibana on the ES master node.

[root@master-node ~]# cd /root/product
[root@master-node product]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
[root@master-node product]# rpm -ivh kibana-6.4.0-x86_64.rpm
warning: kibana-6.4.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:kibana-6.4.0-1 ################################# [100%]
[root@master-node product]#


Editing the Kibana config

[root@master-node product]# more /etc/kibana/kibana.yml |grep -v "^#"
[root@master-node product]# vim /etc/kibana/kibana.yml

Add the following settings:

server.port: 5601 # Kibana's port
server.host: 192.168.220.71 # IP to listen on
elasticsearch.url: "http://192.168.220.71:9200" # ES server IP; for a cluster, point at the master node
logging.dest: /var/log/kibana.log # Kibana log file path; otherwise logs go to /var/log/messages by default

Create the log file and grant permissions:

[root@master-node product]# touch /var/log/kibana.log
[root@master-node log]# chmod 777 /var/log/kibana.log

Start Kibana and check the process:

[root@master-node log]# systemctl start kibana
[root@master-node log]# ps aux |grep kibana
kibana 5307 37.8 9.1 1122624 168436 ? Rsl 21:23 0:11 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root 5362 0.0 0.0 112644 948 pts/0 R+ 21:24 0:00 grep --color=auto kibana
[root@master-node log]#

Check the listening port:

[root@master-node log]# netstat -lntp |grep 5601
tcp 0 0 192.168.220.71:5601 0.0.0.0:* LISTEN 5307/node
[root@master-node log]#

Open http://192.168.220.71:5601 in a browser.


Installing Logstash

Logstash is the log-collection tool; install it on every machine whose logs you want to collect.
Here we install Logstash on 192.168.220.72. Note that Logstash does not currently support JDK 9.
For other installation methods, see the official guide:
https://www.elastic.co/guide/en/logstash/current/installing-logstash.html

[root@data-node1 ~]# cd /root/product
[root@data-node1 product]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm
[root@data-node1 product]# rpm -ivh logstash-6.4.0.rpm
warning: logstash-6.4.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Updating / installing...
1:logstash-1:6.4.0-1 ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
[root@data-node1 product]#

After installation, first configure Logstash to collect syslog logs:

[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf
input { # define the log source
syslog {
type => "system-syslog" # define the type
port => 10514 # define the listening port
}
}
output { # define the log output
stdout {
codec => rubydebug # print events to the current terminal
}
}
"/etc/logstash/conf.d/syslog.conf" [New] 12L, 248C written
[root@data-node1 ~]#

Check the config file for errors:

[root@data-node1 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-13T10:14:12,020][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-09-13T10:14:12,081][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-09-13T10:14:13,808][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-13T10:14:21,559][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@data-node1 bin]#
# Seeing "Configuration OK" means the config is valid

--path.settings specifies the directory containing Logstash's settings files
-f specifies the path of the config file to check
--config.test_and_exit makes Logstash exit after checking the config; without it, Logstash would start normally
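The validate-then-restart pattern can be wrapped in a small guard so an invalid config never takes down a running pipeline. A sketch with stand-in commands; on the server, the validate command would be the --config.test_and_exit run shown above:

```shell
# Sketch: run a restart command only if the validation command succeeds.
safe_restart() {
  local validate_cmd="$1" restart_cmd="$2"
  if eval "$validate_cmd"; then
    eval "$restart_cmd"
  else
    echo "config invalid, not restarting" >&2
    return 1
  fi
}
# Stand-in commands for illustration; on the server it would be:
#   safe_restart "/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit" \
#                "systemctl restart logstash"
safe_restart "true" "echo restarting logstash"
```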

Configure rsyslog to forward logs to the IP and port that Logstash listens on:

[root@data-node1 bin]# vim /etc/rsyslog.conf
*.* @@192.168.220.72:10514

Restart rsyslog so the change takes effect:

[root@data-node1 bin]# systemctl restart rsyslog
[root@data-node1 bin]#

Start Logstash with the config file we just created; log events will be printed to this terminal.
Then visit http://192.168.220.72:10514/ in a browser, or open a new terminal and run:
curl http://192.168.220.72:10514/
If the request is printed on the Logstash terminal, collection is working.

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-13T11:01:58,406][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-09-13T11:02:00,454][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2018-09-13T11:02:08,785][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-09-13T11:02:09,979][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1d9e66b1 run>"}
[2018-09-13T11:02:10,085][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-13T11:02:10,131][INFO ][logstash.inputs.syslog ] Starting syslog udp listener {:address=>"0.0.0.0:10514"}
[2018-09-13T11:02:10,180][INFO ][logstash.inputs.syslog ] Starting syslog tcp listener {:address=>"0.0.0.0:10514"}
[2018-09-13T11:02:11,596][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-09-13T11:03:00,340][INFO ][logstash.inputs.syslog ] new connection {:client=>"192.168.220.72:34664"}
{
    "type" => "system-syslog",
    "facility_label" => "kernel",
    "facility" => 0,
    "message" => "GET / HTTP/1.1\r\n",
    "severity_label" => "Emergency",
    "@version" => "1",
    "host" => "192.168.220.72",
    "severity" => 0,
    "tags" => [
        [0] "_grokparsefailure_sysloginput"
    ],
    "@timestamp" => 2018-09-13T03:03:00.405Z,
    "priority" => 0
}
{
    "type" => "system-syslog",
    "facility_label" => "kernel",
    "facility" => 0,
    "message" => "User-Agent: curl/7.29.0\r\n",
    "severity_label" => "Emergency",
    "@version" => "1",
    "host" => "192.168.220.72",
    "severity" => 0,
    "tags" => [
        [0] "_grokparsefailure_sysloginput"
    ],
    "@timestamp" => 2018-09-13T03:03:00.568Z,
    "priority" => 0
}

Configuring Logstash output to Elasticsearch

[root@data-node1 ~]# vim /etc/logstash/conf.d/syslog.conf

input { # define the log source
syslog {
type => "system-syslog" # define the type
port => 10514 # define the listening port
}
}
output { # define the log output
elasticsearch {
hosts => ["192.168.220.71:9200"] # ES server IP
index => "system-syslog-%{+YYYY.MM}" # index name pattern
}
}
"/etc/logstash/conf.d/syslog.conf" 13L, 305C written
[root@data-node1 ~]#

Configure the listening IP:

[root@data-node1 ~]# vim /etc/logstash/logstash.yml
http.host: "192.168.220.72"

Check the config file for errors again:

[root@data-node1 ~]# cd /usr/share/logstash/bin
[root@data-node1 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-13T11:11:11,314][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-13T11:11:22,311][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@data-node1 bin]#

Grant ownership of the log file:

[root@data-node1 ~]# chown logstash /var/log/logstash/logstash-plain.log
[root@data-node1 ~]# ll !$
ll /var/log/logstash/logstash-plain.log
-rw-r--r--. 1 logstash root 4688 Sep 13 11:11 /var/log/logstash/logstash-plain.log

Grant ownership of the data directories:

[root@data-node1 ~]# ll /var/lib/logstash/
total 4
drwxr-xr-x. 2 root root 6 Sep 13 10:14 dead_letter_queue
drwxr-xr-x. 2 root root 6 Sep 13 10:14 queue
-rw-r--r--. 1 root root 36 Sep 13 10:25 uuid
[root@data-node1 ~]# chown -R logstash /var/lib/logstash/
[root@data-node1 ~]# ll /var/lib/logstash/
total 4
drwxr-xr-x. 2 logstash root 6 Sep 13 10:14 dead_letter_queue
drwxr-xr-x. 2 logstash root 6 Sep 13 10:14 queue
-rw-r--r--. 1 logstash root 36 Sep 13 10:25 uuid

Restart Logstash:

[root@data-node1 ~]# systemctl restart logstash
[root@data-node1 ~]#

Check the listening ports:

[root@data-node1 ~]# netstat -lntp |grep 10514
tcp6 0 0 :::10514 :::* LISTEN 10922/java
[root@data-node1 ~]# netstat -lntp |grep 9600
tcp6 0 0 192.168.220.72:9600 :::* LISTEN 10922/java
[root@data-node1 ~]#

Access it in a browser:
http://192.168.220.72:10514/
or run: curl http://192.168.220.72:10514/

Check the indices in Elasticsearch:

[root@data-node1 ~]# curl '192.168.220.71:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2018.09 K1M6FtzXS7CLjfmJ4rfeog 5 1 5 0 59kb 33.4kb
green open .kibana k94rlEYtQi-AGx42BoTFiQ 1 1 1 0 8kb 4kb
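The _cat/indices output is plain whitespace-separated columns, so it is easy to slice with awk, for example to keep an eye on the doc count of the syslog index. A sketch using a line captured from the output above (in practice, pipe the curl output straight into awk):

```shell
# Sketch: print index name and doc count from one _cat/indices line.
# Columns: health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
line='green open system-syslog-2018.09 K1M6FtzXS7CLjfmJ4rfeog 5 1 5 0 59kb 33.4kb'
printf '%s\n' "$line" | awk '{print $3, "docs:", $7}'
```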
