Linux setup: network configuration, JDK, Redis, ElasticSearch, Logstash (FileBeat, MetricBeat), Kibana, MySQL, Docker, Zookeeper

Linux Setup

Huawei Cloud mirror: https://repo.huaweicloud.com/centos/

Downloaded image: 7.9.2009/isos/x86_64/CentOS-7-x86_64-DVD-2009.iso

After installation, configure the time zone and add a user.

Disable kdump.

Under "Software Selection", select all of the add-ons on the right.

Under "Network & Hostname", enable ens33 with automatic network configuration.

Delete the UUID line from /etc/sysconfig/network-scripts/ifcfg-ens33 (edit with vim).

Environment Setup

Check the inet address of ens33

ip a

Set the hostname (the name shown in root@name)

hostnamectl set-hostname <name>

Install vim

yum install -y vim

Install wget

yum install -y wget

List all files, including hidden ones

ls -a

Move a file

mv /root/xxxx /usr/local/xxxx

Find a file's path

Absolute (search from the root): find / -name redis

Relative (search from the current directory): find . -name redis
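The two find modes above can be sketched on a throwaway directory tree; the directory and file names here are invented for illustration.

```shell
# Sketch: absolute vs. relative find (paths are made up for the demo).
tmp=$(mktemp -d)
mkdir -p "$tmp/apps/redis"

# Absolute-style search: start from a fixed root (here $tmp instead of /)
find "$tmp" -name redis

# Relative search: start from the current directory
cd "$tmp"
find . -name redis        # prints ./apps/redis
```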

Delete .swp files (left behind when vim exits abnormally)

rm -rf xxx.swp

Copy a file to the same path on a target server

scp -r ./xxx/ user@ip:"$(pwd)"

Write text to a file

echo 2 > /xxxx/xx
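Note that `>` truncates before writing while `>>` appends; a small sketch (the file path is illustrative):

```shell
# Sketch: > truncates the file first, >> appends to it.
f=$(mktemp)
echo 2 > "$f"      # file contains: 2
echo 3 >> "$f"     # file contains: 2 and 3
echo 9 > "$f"      # truncated again: only 9 remains
cat "$f"           # prints 9
```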

Network Configuration

Stop the firewall

systemctl stop firewalld.service

Check the firewall status

systemctl status firewalld.service

Disable the firewall service

systemctl disable firewalld.service

Bridged Mode

After switching the VM to bridged mode:

Check the IP address

ip a

Change the IP

vim /etc/sysconfig/network-scripts/ifcfg-ens33
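For a static address, the edited file usually ends up similar to this minimal sketch; the IP, gateway, and DNS values below are placeholders for your own LAN, not prescribed values:

```ini
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.31.128
NETMASK=255.255.255.0
GATEWAY=192.168.31.1
DNS1=114.114.114.114
```

After saving, restart the network for the change to take effect.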
 

Restart the network

systemctl restart network

If the network service fails to start, stop the NetworkManager service:

systemctl stop NetworkManager
systemctl disable NetworkManager

Then restart the network service:

systemctl restart network
systemctl status network

Check the outbound (public) IP

curl cip.cc

Passwordless SSH

For server A to SSH into server B without a password: on A, run `ssh localhost` once to create the .ssh directory, generate a key pair with the chosen algorithm, then append A's public key to B's .ssh/authorized_keys.

Create the .ssh directory
Run: ssh localhost

dsa and rsa are different key algorithms
Run:
    ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    or
    ssh-keygen -t rsa

Append server A's public key to .ssh/authorized_keys on server B (whether B also needs to run `ssh localhost` once first is untested).
On server A, run:
    ssh-copy-id root@10.124.84.20

There are several ways to get the public key into the server's ~/.ssh/authorized_keys:
1. Copy the public key to the server with scp, then append it to ~/.ssh/authorized_keys — the most cumbersome option:
scp -P 22 ~/.ssh/id_rsa.pub user@host:~/
2. Use ssh-copy-id, as demonstrated above: ssh-copy-id user@host
3. Use cat ~/.ssh/id_rsa.pub | ssh -p 22 user@host 'cat >> ~/.ssh/authorized_keys' — also common, since it lets you change the port.
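Putting the pieces together, the A-side setup can be sketched as the script below; user@host stands in for the real B server and is not a value from this guide.

```shell
# Sketch: one-time passwordless-SSH setup, run on server A.
# user@host is a placeholder for server B.
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa  # no-passphrase key pair
ssh-copy-id -p 22 user@host      # appends ~/.ssh/id_rsa.pub to B's authorized_keys
ssh user@host hostname           # should now log in without prompting for a password
```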


Install nginx

Official download page: nginx: download

Upload to the server and extract

tar -zxvf nginx-1.24.0.tar.gz

Enter the extracted directory and run, in order:

./configure
make
make install

It installs under /usr/local/ by default. Go to /usr/local/nginx/sbin and run:

./nginx

JDK Installation

Download from the official site

Java Downloads | Oracle

Extract the tarball

tar -zxvf /usr/local/jdk-8u181-linux-x64.tar.gz

Configure environment variables

vim /etc/profile
Press "G" to jump to the last line, press "i" to enter insert mode, and append the following lines (JAVA_HOME must point at the extracted JDK directory):

export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JRE_HOME=$JAVA_HOME/jre

Redis Installation

Download the tarball

wget https://download.redis.io/releases/redis-6.0.10.tar.gz

Extract

tar zxvf redis-6.0.10.tar.gz

Install gcc

yum install -y gcc*

For Redis 6.0 and later, gcc must be upgraded to 5.3 or newer

#Upgrade to 5.3 or later
yum -y install centos-release-scl
yum -y install devtoolset-9-gcc devtoolset-9-gcc-c++ devtoolset-9-binutils
scl enable devtoolset-9 bash
#Note: scl enable is temporary; exiting the shell or rebooting reverts to the old gcc.
#To make it permanent, run:
echo "source /opt/rh/devtoolset-9/enable" >>/etc/profile

Build Redis

make && make install

Start

redis-server redis.conf

Configuration file

Change bind to the server's IP

daemonize yes

protected-mode no

Register as a systemd service

vim /usr/lib/systemd/system/redis.service

Write the following, adjusting the Redis binary path and config file path in ExecStart to your install:

--------------------------------------------------------------

[Unit]
Description=redis
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/redis/src/redis-server /usr/local/redis/etc/redis.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target

--------------------------------------------------------------

systemctl daemon-reload  //reload unit files

systemctl enable redis.service //start at boot

systemctl start redis //start the Redis service

systemctl status redis //check Redis status

Bulk-loading commands

cat redis.txt | redis-cli -p 6379 -h 192.168.31.128 --pipe
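redis.txt is just one command per line; redis-cli --pipe converts the lines to the Redis protocol itself. A sketch generating 100 SET commands (the key and value names are invented):

```shell
# Sketch: generate a bulk-load file for redis-cli --pipe.
for i in $(seq 1 100); do
  echo "SET key:$i val:$i"
done > redis.txt

wc -l < redis.txt    # 100 lines, one command each
# cat redis.txt | redis-cli -p 6379 -h 192.168.31.128 --pipe
```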

MySQL

Download MySQL from the Huawei Cloud mirror

wget https://repo.huaweicloud.com/mysql/Downloads/MySQL-8.0/mysql-8.0.23-1.el7.x86_64.rpm-bundle.tar

Extract

tar -xvf mysql-8.0.23-1.el7.x86_64.rpm-bundle.tar -C ./mysql

Check whether MySQL is already installed; remove it if so

rpm -qa | grep mysql

rpm -qa | grep mariadb

Remove: rpm -e --nodeps mariadb-libs-5.5.68-1.el7.x86_64

Install the rpms from the extraction directory

Option 1 (offline): append --force --nodeps to each command below
Option 2 (online): yum install net-tools

rpm -ivh mysql-community-common-8.0.23-1.el7.x86_64.rpm --force --nodeps
rpm -ivh mysql-community-client-plugins-8.0.23-1.el7.x86_64.rpm --force --nodeps
rpm -ivh mysql-community-libs-8.0.23-1.el7.x86_64.rpm --force --nodeps
rpm -ivh mysql-community-client-8.0.23-1.el7.x86_64.rpm --force --nodeps 
rpm -ivh mysql-community-server-8.0.23-1.el7.x86_64.rpm --force --nodeps 

Initialize the database

mysqld --initialize;
chown mysql:mysql /var/lib/mysql -R;
systemctl start mysqld.service;
systemctl enable mysqld;

Look up the initial password

cat /var/log/mysqld.log | grep password

Relax the password validation policy

vim /etc/my.cnf  and add: validate_password=off

Change the password

ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';

Allow remote access to the database

use mysql;
select user,host from user;
The user's host must be '%' for remote connections.

create user 'root'@'%' identified with mysql_native_password by 'root';
grant all privileges on *.* to 'root'@'%' with grant option;
flush privileges;

ElasticSearch / Kibana RPM Installation

Official guide

Install Elasticsearch with RPM | Elasticsearch Guide [7.16] | Elastic

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.13.2-x86_64.rpm.sha512
shasum -a 512 -c elasticsearch-7.13.2-x86_64.rpm.sha512
sudo rpm --install elasticsearch-7.13.2-x86_64.rpm
Start
systemctl start elasticsearch
Config file path
/etc/elasticsearch
Install (bin) location
/usr/share/elasticsearch
Configuration file:

# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html

# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: es_cluster

# Use a descriptive name for the node:
#
#node.name: node-1
node.name: node01
#
# Add custom attributes to the node:
#
#node.attr.rack: r1

# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch

# Lock the memory on startup:
#
#bootstrap.memory_lock: true
bootstrap.memory_lock: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.

# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.31.128
#
# Set a custom port for HTTP:
#
#http.port: 9200
transport.port: 9300
http.port: 9200
#
# For more information, consult the network module documentation.

# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.seed_hosts: ["192.168.31.128:9300"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
cluster.initial_master_nodes: ["node01"]
#
# For more information, consult the discovery and cluster formation module documentation.

# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.

# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"

node.master: true
node.data: true

If you enable bootstrap.memory_lock: true with an RPM install, see the official docs: https://www.elastic.co/guide/en/elasticsearch/reference/7.13/setting-system-settings.html#systemd

Add to /usr/lib/systemd/system/elasticsearch.service (vim), under [Service]:
LimitMEMLOCK=infinity
sudo systemctl daemon-reload
If that still fails, edit /etc/security/limits.conf (vim):
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

After the service starts, http://192.168.31.128:9200/_nodes shows the configured paths.

IK Analyzer

Download the source: https://github.com/medcl/elasticsearch-analysis-ik
Build with `mvn package`, copy the zip from the releases directory into es/plugins/ik, and unzip it there.
Restart ES.

ik/config

Configures the custom dictionaries:
IKAnalyzer.cfg.xml

Set the dictionary file paths in IKAnalyzer.cfg.xml.
Add your own entries in the corresponding files — main words usually go in custom/mydict.dic,
                               and stop words in custom/ext_stopword.dic.

The built-in Chinese dictionary, with over 270,000 entries; any word listed here is kept together as one token:
main.dic

Measure words and units:
quantifier.dic

Suffixes:
suffix.dic

Chinese surnames:
surname.dic

English stop words:
stopword.dic

Chinese stop words:
extra_stopword.dic


Kibana

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.11.1-x86_64.rpm

rpm -ivh kibana-7.11.1-x86_64.rpm

systemctl status kibana
systemctl start kibana
systemctl enable kibana

vim /etc/kibana/kibana.yml

server.host: "192.168.31.128"
server.port: 5601
i18n.locale: "zh-CN"
elasticsearch.hosts: ["http://192.168.31.128:9200"]

FileBeat
Install
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.11.1-linux-x86_64.tar.gz
tar -xvf filebeat-7.11.1-linux-x86_64.tar.gz
Edit the configuration file
vim filebeat.yml

# ============================== Filebeat inputs ===============================

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /usr/local/logs/*.log
# ================================= Dashboards =================================
setup.dashboards.enabled: true

# =================================== Kibana ===================================
setup.kibana:

  host: "192.168.31.128:5601"

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  hosts: ["192.168.31.128:9200"]

Start
sudo ./filebeat -e -c ./filebeat.yml 

MetricBeat
Install
curl -L -O https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-7.11.2-linux-x86_64.tar.gz
tar -xvf metricbeat-7.11.2-linux-x86_64.tar.gz
To monitor redis, kibana, or elasticsearch, enable the corresponding modules (configure them per metricbeat.reference.yml); the system module is enabled by default:
./metricbeat modules list
./metricbeat modules enable kibana
./metricbeat modules enable redis
./metricbeat modules enable elasticsearch
metricbeat.yml

#enable the bundled dashboards
setup.dashboards.enabled: true
#Kibana endpoint
setup.kibana:
  host: "172.16.90.24:5601"
#output to elasticsearch
output.elasticsearch:
  hosts: ["172.16.90.24:9200"]

Load the dashboards into Kibana
 ./metricbeat setup --dashboards
Configure redis
./metricbeat modules enable redis
vim /metricbeat-7.11.2-linux-x86_64/modules.d/redis.yml

- module: redis
  #metricsets:
  #  - info
  #  - keyspace
  period: 10s
  hosts: ["192.168.31.128:6379"]


Docker

Container lifecycle

Install prerequisites

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

Add a domestic (Aliyun) package repo

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Rebuild the yum metadata cache

yum makecache fast

Install Docker CE

yum -y install docker-ce

Start the Docker service

systemctl start docker

Configure a registry mirror for faster image pulls

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://zi36pf4x.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

Common Commands

Pull an image from the remote registry (hub.docker.com)

docker pull <image>:<tag>

List local images

docker images

Create a container and start the application

Option 1:
docker create <image>:<tag>
docker start <container-id>

Option 2:
docker run <image>:<tag>

List running containers

docker ps

List all containers, including stopped ones

docker ps -a

Stop a container (moves it to the stopped state)

Option 1:
docker kill <container-id>
Option 2:
docker stop <container-id>

Pause and resume a container's processes (it remains in the running state)

docker pause <container-id>

docker unpause <container-id>

Remove a container (-f forces removal even while running)

docker rm [-f] <container-id>

Remove an image (-f forces removal even if containers reference it)

docker rmi [-f] <image>:<tag>

Run a container mapping a host port to a container port (-d runs detached)

docker run -p <host-port>:<container-port> -d <image>:<tag>   (external clients access the host port)

Enter a container

docker exec -it <container-id> /bin/bash

Zookeeper

Install
wget Apache Downloads
Extract
tar -xvf apache-zookeeper-3.7.0-bin.tar.gz
Environment setup
vim /etc/profile

export ZOOKEEPER_HOME=/usr/local/apache-zookeeper-3.7.0-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin

Configuration file zoo.cfg
cp conf/zoo_sample.cfg conf/zoo.cfg
mkdir /var/zookeeper    (must match dataDir in the config file)
echo 1 > /var/zookeeper/myid    (the value must match this node's server.x entry)
vim /etc/hosts
192.168.31.128   node01

dataDir=/var/zookeeper
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888
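Each node's myid must equal the N in that node's own server.N line. A sketch that derives the id from zoo.cfg by hostname — the temp directory stands in for the real /var/zookeeper and conf paths, and node01 follows the example cluster above:

```shell
# Sketch: derive this node's myid from its server.N line in zoo.cfg.
dir=$(mktemp -d)      # stand-in for the real dataDir/conf paths
cat > "$dir/zoo.cfg" <<'EOF'
dataDir=/var/zookeeper
server.1=node01:2888:3888
server.2=node02:2888:3888
server.3=node03:2888:3888
EOF
# fields split on '.', '=' and ':' -> server | N | host | port | port
id=$(awk -F'[.=:]' '$2 ~ /^[0-9]+$/ && $3 == "node01" {print $2}' "$dir/zoo.cfg")
echo "$id" > "$dir/myid"
cat "$dir/myid"       # 1
```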

List all commands
zkServer.sh help
Start
Foreground: zkServer.sh start-foreground
Background: zkServer.sh start

zkCli.sh commands

List child nodes
ls /
Read a node's value
get /xxxx

Create a node (with value "123")
create /xxxx "123"
Create an ephemeral node
create -e /xxxx "123"
Create a sequential node (appends a counter, so same-named creates don't collide)
create -s /xxxx "123"

Hadoop

Install
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz
Extract
tar -xvf hadoop-2.6.5.tar.gz
Environment setup
vim /etc/profile

export HADOOP_HOME=/opt/bigdata/hadoop-2.6.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Configuration files

  • Environment variables: hadoop-env.sh

        export JAVA_HOME=/usr/java/jdk

HDFS

  • NameNode config: core-site.xml

         <property>
                <name>fs.defaultFS</name>
                <value>hdfs://node01:9000</value>
         </property>

  • HDFS config: hdfs-site.xml

        Replica count (replica placement policy)

         <property>
                <name>dfs.replication</name>
                <value>1</value>
         </property>

        NameNode storage location
         <property>
                <name>dfs.namenode.name.dir</name>
                <value>/var/bigdata/hadoop/local/dfs/name</value>
         </property>

        DataNode storage location
         <property>
                <name>dfs.datanode.data.dir</name>
                <value>/var/bigdata/hadoop/local/dfs/data</value>
         </property>
         <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>node01:50090</value>
         </property>
         <property>
                <name>dfs.namenode.checkpoint.dir</name>
               <value>/var/bigdata/hadoop/local/dfs/secondary</value>
         </property>

  • DataNode list: slaves

        node01

Initialization

        hdfs namenode -format
        Creates the directories and initializes an empty fsimage.
