Preface
This article explains why a process can be filtered out by its configuration file name with a command of the form ps -ef | grep <configuration file name>.
I. Introduction to the ps command
ps is short for Process Status and is used to view the state of the processes on the system. Because ps produces a lot of output, it is usually combined with a pipe and grep.
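For example, a typical combination pipes the full listing into grep (a minimal sketch; sshd is only an example name, substitute whatever process you are looking for):
ps -ef | grep sshd   # list every process, keep only the lines that mention sshd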
II. Usage steps
1. First, run the ps command directly, with no arguments.
The command and its output are shown below (ps example).
By default four columns of information are printed:
PID: the process ID of the running command (CMD)
TTY: the terminal the command is running on
TIME: the CPU time the command has used
CMD: the command the process is running
[localhost@hadoop102 ~]$ ps
PID TTY TIME CMD
7699 pts/0 00:00:00 bash
7904 pts/0 00:00:07 java
8431 pts/0 00:00:00 ps
2. ps -ax: add the a option.
a stands for all, i.e. the processes of every user, not only your own.
Adding the x option also includes processes that have no controlling terminal.
The command and its output are shown below (example):
[localhost@hadoop102 ~]$ ps -ax | less
PID TTY STAT TIME COMMAND
1 ? Ss 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
2 ? S 0:00 [kthreadd]
4 ? S< 0:00 [kworker/0:0H]
6 ? R 0:00 [ksoftirqd/0]
7 ? S 0:01 [migration/0]
The output of this command is very long, so it is usually piped into more or less.
For example: ps -ax | less or ps -ax | more
3. View the processes of a specific user
To view the processes of the "root" user, run ps -u root:
[localhost@hadoop102 ~]$ ps -u root
PID TTY TIME CMD
1 ? 00:00:05 systemd
2 ? 00:00:00 kthreadd
4 ? 00:00:00 kworker/0:0H
6 ? 00:00:00 ksoftirqd/0
7 ? 00:00:01 migration/0
8 ? 00:00:00 rcu_bh
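The same option works for your own account, and -u also accepts a comma-separated list of users (a small sketch, assuming the usual USER environment variable is set):
ps -u "$USER"        # processes owned by the user running the command
ps -u root,"$USER"   # processes of several users at once (comma-separated list)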
4. Filter processes by CPU and memory usage
Sorting the results by CPU or memory usage makes it easy to spot which process is eating your resources. To do that, use the aux options, which print the full set of columns:
ps -aux |less
[localhost@hadoop102 ~]$ ps -aux |less
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 128224 6932 ? Ss 18:06 0:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
root 2 0.0 0.0 0 0 ? S 18:06 0:00 [kthreadd]
root 4 0.0 0.0 0 0 ? S< 18:06 0:00 [kworker/0:0H]
root 6 0.0 0.0 0 0 ? S 18:06 0:00 [ksoftirqd/0]
root 7 0.0 0.0 0 0 ? S 18:06 0:01 [migration/0]
root 8 0.0 0.0 0 0 ? S 18:06 0:00 [rcu_bh]
root 9 0.0 0.0 0 0 ? S 18:06 0:06 [rcu_sched]
root 10 0.0 0.0 0 0 ? S< 18:06 0:00 [lru-add-drain]
The output can be sorted with the --sort option.
Sort by CPU usage in descending order (the leading - before pcpu means descending):
ps -aux --sort -pcpu | less
[localhost@hadoop102 ~]$ ps -aux --sort -pcpu | less
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
atguigu 6196 10.2 6.0 3119292 307388 ? Sl 21:31 5:38 /opt/module/jdk1.8.0_212/bin/java -Dproc_namenode -Djava.net.preferIPv4Stack=true -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dyarn.log.dir=/opt/module/hadoop-3.1.3/logs -Dyarn.log.file=hadoop-atguigu-namenode-hadoop102.log -Dyarn.home.dir=/opt/module/hadoop-3.1.3 -Dyarn.root.logger=INFO,console -Djava.library.path=/opt/module/hadoop-3.1.3/lib/native -Dhadoop.log.dir=/opt/module/hadoop-3.1.3/logs -Dhadoop.log.file=hadoop-atguigu-namenode-hadoop102.log -Dhadoop.home.dir=/opt/module/hadoop-3.1.3 -Dhadoop.id.str=atguigu -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.namenode.NameNode
atguigu 6911 8.0 7.5 3110312 384080 ? Sl 21:35 4:06 /opt/module/jdk1.8.0_212/bin/java -Dproc_historyserver -Djava.net.preferIPv4Stack=true -Dmapred.jobsummary.logger=INFO,RFA -Dyarn.log.dir=/opt/module/hadoop-3.1.3/logs -Dyarn.log.file=hadoop-atguigu-historyserver-hadoop102.log -Dyarn.home.dir=/opt/module/hadoop-3.1.3 -Dyarn.root.logger=INFO,console -Djava.library.path=/opt/module/hadoop-3.1.3/lib/native -Dhadoop.log.dir=/opt/module/hadoop-3.1.3/logs -Dhadoop.log.file=hadoop-atguigu-historyserver-hadoop102.log -Dhadoop.home.dir=/opt/module/hadoop-3.1.3 -Dhadoop.id.str=atguigu -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
atguigu 6331 6.0 4.4 3108368 225884 ? Sl 21:31 3:18 /opt/module/jdk1.8.0_212/bin/java -Dproc_datanode -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=ERROR,RFAS -Dyarn.log.dir=/opt/module/hadoop-3.1.3/logs -Dyarn.log.file=hadoop-atguigu-datanode-hadoop102.log -Dyarn.home.dir=/opt/module/hadoop-3.1.3 -Dyarn.root.logger=INFO,console -Djava.library.path=/opt/module/hadoop-3.1.3/lib/native -Dhadoop.log.dir=/opt/module/hadoop-3.1.3/logs -Dhadoop.log.file=hadoop-atguigu-datanode-hadoop102.log -Dhadoop.home.dir=/opt/module/hadoop-3.1.3 -Dhadoop.id.str=atguigu -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml org.apache.hadoop.hdfs.server.datanode.DataNode
Sort by memory usage in descending order:
ps -aux --sort -pmem | less
[localhost@hadoop102 ~]$ ps -aux --sort -pmem | less
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
localhost 6911 7.7 7.6 3110312 386128 ? Sl 21:35 4:06 /opt/module/jdk1.8.0_212/bin/java -Dproc_historyserver -Djava.net.preferIPv4Stack=true -Dmapred.jobsummary.logger=INFO,RFA -Dyarn.log.dir=/opt/module/hadoop-3.1.3/logs -Dyarn.log.file=hadoop-atguigu-historyserver-hadoop102.log -Dyarn.home.dir=/opt/module/hadoop-3.1.3 -Dyarn.root.logger=INFO,console -Djava.library.path=/opt/module/hadoop-3.1.3/lib/native -Dhadoop.log.dir=/opt/module/hadoop-3.1.3/logs -Dhadoop.log.file=hadoop-atguigu-historyserver-hadoop102.log -Dhadoop.home.dir=/opt/module/hadoop-3.1.3 -Dhadoop.id.str=atguigu -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer
atguigu 7343 4.2 7.0 4742528 357800 ? Sl 21:36 2:14 /opt/module/jdk1.8.0_212/bin/java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -Djava.awt.headless=true -Xloggc:/opt/module/kafka/bin/../logs/kafkaServer-gc.log -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dkafka.logs.dir=/opt/module/kafka/bin/../logs -Dlog4j.configuration=file:/opt/module/kafka/bin/../config/log4j.properties -cp /opt/module/kafka/bin/../libs/activation-1.1.1.jar:/opt/module/kafka/bin/../libs/aopalliance-repackaged-2.5.0.jar:/opt/module/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/module/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/module/kafka/bin/../libs/commons-cli-1.4.jar:/opt/module/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/module/kafka/bin/../libs/connect-api-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-basic-auth-extension-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-file-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-json-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-mirror-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-mirror-client-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-runtime-2.4.1.jar:/opt/module/kafka/bin/../libs/connect-transforms-2.4.1.jar:/opt/module/kafka/bin/../libs/guava-20.0.jar:/opt/module/kafka/bin/../libs/hk2-api-2.5.0.jar:/opt/module/kafka/bin/../libs/hk2-locator-2.5.0.jar:/opt/module/kafka/bin/../libs/hk2-utils-2.5.0.jar:/opt/module/kafka/bin/../libs/jackson-annotations-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-core-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-databind-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-module-paranamer-2.10.0.jar:/opt/module/kafka/bin/../libs/jackson-module-scala_2.11-2.10.0.jar:/opt/module/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/module/kafka/bin/../libs/jakarta.annotation-api-1.3.4.jar:/opt/module/kafka/bin/../libs/jakarta.inject-2.5.0.jar:/opt/module/kafka/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/opt/module/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/module/kafka/bin/../libs/javassist-3.22.0-CR2.jar:/opt/module/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/module/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/module/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/module/kafka/bin/../libs/jersey-client-2.28.jar:/opt/module/kafka/bin/../libs/jersey-common-2.28.jar:/opt/module/kafka/bin/../libs/jersey-container-servlet-2.28.jar:/opt/module/kafka/bin/../libs/jersey-container-servlet-core-2.28.jar:/opt/module/kafka/bin/../libs/jersey-hk2-2.28.jar:/opt/module/kafka/bin/../libs/jersey-media-jaxb-2.28.jar:/opt/module/kafka/bin/../libs/jersey-server-2.28.jar:/opt/module/kafka/bin/../libs/jetty-client-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-http-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-io-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-security-9.4.20.v20190813.j
ar:/opt/module/kafka/bin/../libs/jetty-server-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jetty-util-9.4.20.v20190813.jar:/opt/module/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/module/kafka/bin/../libs/kafka_2.11-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka_2.11-2.4.1-sources.jar:/opt/module/kafka/bin/../libs/kafka-clients-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-log4j-appender-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-streams-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-streams-examples-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-streams-scala_2.11-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-streams-test-utils-2.4.1.jar:/opt/module/kafka/bin/../libs/kafka-tools-2.4.1.jar:/opt/module/kafka/bin/../libs/log4j-1.2.17.jar:/opt/module/kafka/bin/../libs/lz4-java-1.6.0.jar:/opt/module/kafka/bin/../libs/maven-artifact-3.6.1.jar:/opt/module/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/module/kafka/bin/../libs/netty-buffer-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-codec-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-common-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-handler-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-resolver-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-transport-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-transport-native-epoll-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/netty-transport-native-unix-common-4.1.45.Final.jar:/opt/module/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/opt/module/kafka/bin/../libs/paranamer-2.8.jar:/opt/module/kafka/bin/../libs/plexus-utils-3.2.0.jar:/opt/module/kafka/bin/../libs/reflections-0.9.11.jar:/opt/module/kafka/bin/../libs/rocksdbjni-5.18.3.jar:/opt/module/kafka/bin/../libs/scala-collection-compat_2.11-2.1.2.jar:/opt/module/kafka/bin/../libs/scala-java8-compat_2.11-0.9.0.jar:/opt/module/kafka/bin/../libs/scala-library-2.11.12.jar:/opt/module/kafka/bin/../libs/scala-logging_2.11-3.9.2.jar:/opt/module/kafka/bin/../libs/scala-reflect-2.11.12.jar:/opt/module/kafka/bin/../libs/slf4j-api-1.7.28.jar:/opt/module/kafka/bin/../libs/slf4j-log4j12-1.7.28.jar:/opt/module/kafka/bin/../libs/snappy-java-1.1.7.3.jar:/opt/module/kafka/bin/../libs/validation-api-2.0.1.Final.jar:/opt/module/kafka/bin/../libs/zookeeper-3.5.7.jar:/opt/module/kafka/bin/../libs/zookeeper-jute-3.5.7.jar:/opt/module/kafka/bin/../libs/zstd-jni-1.4.3-1.jar kafka.Kafka /opt/module/kafka/config/server.properties
You can also pipe the output into head to show only a given number of lines:
ps -aux --sort -pmem |head -n 10
This sorts by memory usage in descending order and shows the first 10 lines.
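If you only care about a few columns, a related trick is to let ps print exactly those columns with -o and sort them with --sort (a sketch of an alternative form; the column list here is just an illustration):
ps -eo pid,user,%mem,%cpu,comm --sort=-%mem | head -n 11   # header plus the 10 biggest memory consumers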
5. Filter by process name and PID
Use the -C option followed by the name of the process you are looking for, for example a process named md.
Add the -f option to print a full-format listing:
ps -f -C md
[localhost@hadoop102 ~]$ ps -f -C md
UID PID PPID C STIME TTY TIME CMD
root 37 2 0 18:06 ? 00:00:00 [md]
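If all you need is the PID for a given name, pgrep does the matching for you; it ships alongside ps in the procps package on most distributions (a brief sketch):
pgrep -l md                  # print the PID and name of every process whose name matches md
pgrep -f file-flume-kafka    # -f matches against the full command line, handy for the case in the preface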
6. Display processes as a tree with pstree
[localhost@hadoop102 ~]$ pstree
systemd─┬─ModemManager───2*[{ModemManager}]
├─NetworkManager───2*[{NetworkManager}]
├─VGAuthService
├─2*[abrt-watch-log]
├─abrtd
├─accounts-daemon───2*[{accounts-daemon}]
├─alsactl
├─at-spi-bus-laun─┬─dbus-daemon───{dbus-daemon}
│ └─3*[{at-spi-bus-laun}]
├─at-spi2-registr───2*[{at-spi2-registr}]
├─atd
├─auditd─┬─audispd─┬─sedispatch
│ │ └─{audispd}
│ └─{auditd}
├─avahi-daemon───avahi-daemon
├─boltd───2*[{boltd}]
├─chronyd
├─colord───2*[{colord}]
├─crond
├─cupsd
├─2*[dbus-daemon───{dbus-daemon}]
├─dbus-launch
├─dconf-service───2*[{dconf-service}]
├─dnsmasq───dnsmasq
├─evolution-addre─┬─evolution-addre───5*[{evolution-addre}]
│ └─4*[{evolution-addre}]
├─evolution-calen─┬─evolution-calen───8*[{evolution-calen}]
│ └─4*[{evolution-calen}]
├─evolution-sourc───3*[{evolution-sourc}]
├─fwupd───4*[{fwupd}]
├─gdm─┬─X───5*[{X}]
│ ├─gdm-session-wor─┬─gnome-session-b─┬─abrt-applet───2*[{abrt-applet}]
│ │ │ ├─gnome-shell─┬─ibus-daemon─┬─ibus-dconf───3*[{ibus-dconf}]
│ │ │ │ │ ├─ibus-engine-lib───2*[{ibus-engine-lib}]
│ │ │ │ │ └─2*[{ibus-daemon}]
│ │ │ │ └─20*[{gnome-shell}]
│ │ │ ├─gnome-software───3*[{gnome-software}]
│ │ │ ├─gsd-a11y-settin───3*[{gsd-a11y-settin}]
│ │ │ ├─gsd-account───3*[{gsd-account}]
│ │ │ ├─gsd-clipboard───2*[{gsd-clipboard}]
│ │ │ ├─gsd-color───3*[{gsd-color}]
│ │ │ ├─gsd-datetime───3*[{gsd-datetime}]
│ │ │ ├─gsd-disk-utilit───2*[{gsd-disk-utilit}]
│ │ │ ├─gsd-housekeepin───3*[{gsd-housekeepin}]
│ │ │ ├─gsd-keyboard───3*[{gsd-keyboard}]
│ │ │ ├─gsd-media-keys───3*[{gsd-media-keys}]
│ │ │ ├─gsd-mouse───3*[{gsd-mouse}]
│ │ │ ├─gsd-power───3*[{gsd-power}]
│ │ │ ├─gsd-print-notif───2*[{gsd-print-notif}]
│ │ │ ├─gsd-rfkill───2*[{gsd-rfkill}]
│ │ │ ├─gsd-screensaver───2*[{gsd-screensaver}]
│ │ │ ├─gsd-sharing───3*[{gsd-sharing}]
│ │ │ ├─gsd-smartcard───4*[{gsd-smartcard}]
│ │ │ ├─gsd-sound───3*[{gsd-sound}]
│ │ │ ├─gsd-wacom───2*[{gsd-wacom}]
│ │ │ ├─gsd-xsettings───3*[{gsd-xsettings}]
│ │ │ ├─nautilus-deskto───3*[{nautilus-deskto}]
│ │ │ ├─seapplet
│ │ │ ├─ssh-agent
│ │ │ ├─tracker-extract───14*[{tracker-extract}]
│ │ │ ├─tracker-miner-a───3*[{tracker-miner-a}]
│ │ │ ├─tracker-miner-f───3*[{tracker-miner-f}]
│ │ │ ├─tracker-miner-u───3*[{tracker-miner-u}]
│ │ │ └─3*[{gnome-session-b}]
│ │ └─2*[{gdm-session-wor}]
│ └─3*[{gdm}]
├─gnome-keyring-d─┬─ssh-agent
│ └─3*[{gnome-keyring-d}]
├─gnome-shell-cal───5*[{gnome-shell-cal}]
├─gnome-terminal-─┬─bash─┬─java───29*[{java}]
│ │ ├─5*[less]
│ │ └─pstree
│ ├─gnome-pty-helpe
│ └─3*[{gnome-terminal-}]
├─goa-daemon───3*[{goa-daemon}]
├─goa-identity-se───3*[{goa-identity-se}]
├─gsd-printer───2*[{gsd-printer}]
├─gssproxy───5*[{gssproxy}]
├─gvfs-afc-volume───3*[{gvfs-afc-volume}]
├─gvfs-goa-volume───2*[{gvfs-goa-volume}]
├─gvfs-gphoto2-vo───2*[{gvfs-gphoto2-vo}]
├─gvfs-mtp-volume───2*[{gvfs-mtp-volume}]
├─gvfs-udisks2-vo───2*[{gvfs-udisks2-vo}]
├─gvfsd─┬─gvfsd-burn───2*[{gvfsd-burn}]
│ ├─gvfsd-dnssd───2*[{gvfsd-dnssd}]
│ ├─gvfsd-network───3*[{gvfsd-network}]
│ ├─gvfsd-trash───2*[{gvfsd-trash}]
│ └─2*[{gvfsd}]
├─gvfsd-fuse───5*[{gvfsd-fuse}]
├─gvfsd-metadata───2*[{gvfsd-metadata}]
├─ibus-daemon─┬─ibus-dconf───3*[{ibus-dconf}]
│ └─2*[{ibus-daemon}]
├─ibus-portal───2*[{ibus-portal}]
├─2*[ibus-x11───2*[{ibus-x11}]]
├─imsettings-daem───3*[{imsettings-daem}]
├─irqbalance
├─java───57*[{java}]
├─java───50*[{java}]
├─java───86*[{java}]
├─java───53*[{java}]
├─java───70*[{java}]
├─java───47*[{java}]
├─ksmtuned───sleep
├─libvirtd───16*[{libvirtd}]
├─lsmd
├─lvmetad
├─master─┬─pickup
│ └─qmgr
├─mission-control───3*[{mission-control}]
├─nautilus───4*[{nautilus}]
├─packagekitd───2*[{packagekitd}]
├─polkitd───6*[{polkitd}]
├─pulseaudio───2*[{pulseaudio}]
├─rngd
├─rpcbind
├─rsyslogd───2*[{rsyslogd}]
├─rtkit-daemon───2*[{rtkit-daemon}]
├─smartd
├─sshd
├─systemd-journal
├─systemd-logind
├─systemd-udevd
├─tracker-store───7*[{tracker-store}]
├─tuned───4*[{tuned}]
├─udisksd───4*[{udisksd}]
├─upowerd───2*[{upowerd}]
├─vmtoolsd───{vmtoolsd}
├─vmtoolsd───3*[{vmtoolsd}]
├─wpa_supplicant
└─xdg-permission-───2*[{xdg-permission-}]
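pstree can also print the PID next to each process name, which makes it easier to relate the tree back to the ps output (a small sketch):
pstree -p | less   # -p appends each process's PID to its name in the tree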
7. The final way to filter out the process ID
ps -ef | grep file-flume-kafka | grep -v grep | awk '{print $2}' | xargs -n1 kill -9
(1) In grep -v grep, the -v option inverts the match, i.e. it drops the lines that contain the given pattern.
Because the grep in the pipeline itself carries the search string (file-flume-kafka) on its command line, its own process would also appear in the result, so grep -v grep filters that line back out.
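A common alternative to grep -v grep is to wrap one character of the pattern in brackets: the grep process then carries [f]ile-flume-kafka on its own command line, which the regular expression no longer matches, so the extra grep -v stage is unnecessary (a sketch of that variant):
ps -ef | grep '[f]ile-flume-kafka'   # matches the Flume process but not the grep process itself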
(2) AWK is a language for processing text files and a powerful text-analysis tool.
It is called AWK after the surname initials of its three creators: Alfred Aho, Peter Weinberger, and Brian Kernighan.
For details, see: https://www.runoob.com/linux/linux-comm-awk.html
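For this pipeline, the only awk feature needed is field printing: $2 is the second whitespace-separated field of each line, which in ps -ef output is the PID column (a minimal sketch of just that stage):
ps -ef | grep file-flume-kafka | grep -v grep | awk '{print $2}'   # prints only the PID of the matching process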
(3) xargs takes the PIDs produced by the earlier stages and appends them to kill -9; with -n1, kill -9 is run once per PID.
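As a side note, pkill can do the matching and the killing in a single step; it is convenient but blunt, because -f matches any process whose full command line contains the string, not only the Flume agent (a hedged sketch, use with care):
pkill -9 -f file-flume-kafka   # send SIGKILL to every process whose command line matches file-flume-kafka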
Summary
ps -ef (UNIX-style syntax) and ps aux (BSD-style syntax) both list every process on the system; they differ mainly in which columns they print, so the details are not repeated here.
That is why ps -aux | grep and ps -ef | grep can both be used to find the target process.
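A quick way to see both the similarity and the difference is to compare just the header lines of the two forms (a small sketch; these headers are typical of procps on CentOS 7):
ps -ef | head -n 1    # UID  PID  PPID  C  STIME  TTY  TIME  CMD
ps aux | head -n 1    # USER  PID  %CPU  %MEM  VSZ  RSS  TTY  STAT  START  TIME  COMMAND
Using either form, the Flume agent from the preface can be found like this: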
[localhost@hadoop102 flume]$ ps -aux | grep Application
localhost 7904 0.6 1.9 3446908 97356 pts/0 Sl 21:44 0:05 /opt/module/jdk1.8.0_212/bin/java -Xmx20m -cp /opt/module/flume/lib/*:/opt/module/hadoop-3.1.3/etc/hadoop:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/common/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/*:/lib/* -Djava.library.path=:/opt/module/hadoop-3.1.3/lib/native org.apache.flume.node.Application --name a1 --conf-file conf/file-flume-kafka.conf
localhost 8270 0.0 0.0 112728 980 pts/0 S+ 21:59 0:00 grep --color=auto Application
At the very end of the command line you can see the configuration-file information, --conf-file conf/file-flume-kafka.conf, which is exactly what grepping for the configuration file name matches:
/opt/module/hadoop-3.1.3/share/hadoop/yarn/*:/lib/* -Djava.library.path=:/opt/module/hadoop-3.1.3/lib/native org.apache.flume.node.Application --name a1 --conf-file conf/file-flume-kafka.conf