Configuring SSH & Deploying and Using HDFS & Ensuring Data Quality (Data Refresh Mechanism) & Official Parameters

Installing and configuring the JDK, hosts, and SSH

Installing and configuring the JDK

Upload the tar.gz package:
[root@JD /]# cd /usr/java
[root@JD java]# ll
total 355840
drwxr-xr-x 8 root root      4096 Nov 17 00:24 jdk1.8.0_121
-rw-r--r-- 1 root root 191100510 Nov 14 01:25 jdk1.8.0_121.zip
-rw-r--r-- 1 root root 173271626 Nov 28 14:13 jdk-8u45-linux-x64.gz
[root@JD java]# pwd
/usr/java

Extract it:
[root@JD java]# tar -zxvf jdk-8u45-linux-x64.gz

You must fix the owner and group of the extracted JDK directory (it unpacks with the numeric owner 10 and group 143):

[root@JD java]# ll
total 355844
drwxr-xr-x 8   10  143      4096 Apr 11  2015 jdk1.8.0_45
-rw-r--r-- 1 root root 173271626 Nov 28 14:13 jdk-8u45-linux-x64.gz

Fix the owner and group:
[root@JD java]# chown -R root:root jdk1.8.0_45
[root@JD java]# ll
total 355844
drwxr-xr-x 8 root root      4096 Apr 11  2015 jdk1.8.0_45
-rw-r--r-- 1 root root 173271626 Nov 28 14:13 jdk-8u45-linux-x64.gz

Configure the environment variables globally (append to /etc/profile):
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH

Apply the environment variables:
[root@JD java]# source /etc/profile

Verify that they took effect:
[root@JD java]# which java
/usr/java/jdk1.8.0_45/bin/java

Configuring hosts

Important: never delete the first two system-provided lines.

[root@JD java]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6


192.168.0.3 JD

Configuring SSH

When configuring SSH as the root user, you do not need to grant permissions on authorized_keys; as a non-root user, you must set authorized_keys to 0600, or the SSH setup will not take effect. (The whole flow is condensed into a sketch after the successful test at the end of this section.)

Create the user:
[root@JD /]# useradd hadoop 
[root@JD /]# id hadoop
uid=1002(hadoop) gid=1003(hadoop) groups=1003(hadoop)

Switch to the user:
[root@JD ~]# su - hadoop
Last failed login: Thu Nov 28 12:17:14 CST 2019 from 40.76.65.78 on ssh:notty
There were 114 failed login attempts since the last successful login.
[hadoop@JD ~]$ pwd
/home/hadoop

Generate the key pair:
[hadoop@JD ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:saXNfI0OAlf988yUl7lv0mwHjOaOzFpIdUjo6KOyXz8 hadoop@JD
The key's randomart image is:
+---[RSA 2048]----+
|         .o.     |
|        .o ..    |
|      .oo + ..  +|
|      .o.X . oo+o|
|     .  S = oo.*o|
|      o. o +o o.+|
|     .... .o.  +.|
|  . .. .E+ .. . B|
|  .+.   oo+..  +.|
+----[SHA256]-----+

List the hidden files:
[hadoop@JD ~]$ ll -a
total 12
drwx------  3 hadoop hadoop  70 Nov 28 15:30 .
drwxr-xr-x. 5 root   root    43 Nov 28 15:08 ..
-rw-r--r--  1 hadoop hadoop  18 Apr 11  2018 .bash_logout
-rw-r--r--  1 hadoop hadoop 193 Apr 11  2018 .bash_profile
-rw-r--r--  1 hadoop hadoop 231 Apr 11  2018 .bashrc
drwx------  2 hadoop hadoop  36 Nov 28 15:30 .ssh

Enter the .ssh directory:
[hadoop@JD ~]$ cd .ssh
View the generated key pair:
[hadoop@JD .ssh]$ ll
total 8
-rw------- 1 hadoop hadoop 1675 Nov 28 15:30 id_rsa
-rw-r--r-- 1 hadoop hadoop  391 Nov 28 15:30 id_rsa.pub

Append the public key to authorized_keys (on a real cluster this goes under the target machine's user home; here the target is the same machine):
[hadoop@JD .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@JD .ssh]$ ll
total 12
-rw-rw-r-- 1 hadoop hadoop  391 Nov 28 15:32 authorized_keys
-rw------- 1 hadoop hadoop 1675 Nov 28 15:30 id_rsa
-rw-r--r-- 1 hadoop hadoop  391 Nov 28 15:30 id_rsa.pub

Test whether SSH works. It still prompts for a password, so it is not working yet: for a non-root user, authorized_keys needs 0600 permissions.

[hadoop@JD .ssh]$ ssh JD date
The authenticity of host 'jd (192.168.0.3)' can't be established.
ECDSA key fingerprint is SHA256:OLqoaMxlGFbCq4sC9pYgF+FdbcXHbEbtSrnMiGGFbVw.
ECDSA key fingerprint is MD5:d3:5b:4a:ef:8e:00:41:a0:5e:80:ef:75:76:8a:a3:49.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'jd,192.168.0.3' (ECDSA) to the list of known hosts.
hadoop@jd's password: 
Permission denied, please try again.
hadoop@jd's password: 
Permission denied, please try again.
hadoop@jd's password: 
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

Grant 0600 permissions:
[hadoop@JD .ssh]$ chmod 0600 authorized_keys  
Test SSH again; it succeeds:
[hadoop@JD .ssh]$ ssh JD date               
Thu Nov 28 15:37:31 CST 2019
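
For reference, the whole non-root setup above condenses to a few commands (a minimal sketch, assuming the same single-host hadoop user and the JD hostname from /etc/hosts):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa         # generate the key pair non-interactively
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize our own public key
chmod 0600 ~/.ssh/authorized_keys                # the step non-root users must not skip
ssh JD date                                      # should print the date with no password prompt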

Introduction to Hadoop

A brief introduction

http://archive.cloudera.com/cdh5/cdh/5/
http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.16.2.tar.gz

The hadoop-2.6.0-cdh5.16.2.tar.gz tarball does not correspond to plain Apache Hadoop 2.6.0: it is Apache Hadoop 2.6.0 plus subsequent patches, roughly equivalent to Apache Hadoop 2.9.

CDH's changelogs for version upgrades:

http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.16.2-changes.log
http://archive.cloudera.com/cdh5/cdh/5/hbase-1.2.0-cdh5.16.2-changes.log

The advantage of choosing CDH: version compatibility between components.

The Hadoop software

  • hdfs
    responsible for storage

  • mapreduce
    responsible for computation: jobs and valuable data mining
    but because development is difficult, complexity is high, the code volume is large, maintenance is hard, and computation is slow, people use Hive SQL, Spark, or Flink instead

  • yarn
    resource scheduling and job scheduling

Pseudo-distributed HDFS deployment

  • Upload the file and change the owner and group

      As the root user:
      [root@JD /]# ll
      total 522100
      -rw-r--r--    1 root root 434354462 Nov 28 14:15 hadoop-2.6.0-cdh5.16.2.tar.gz
      [root@JD /]# mv hadoop-2.6.0-cdh5.16.2.tar.gz  /home/hadoop/software/
      
      Change the owner and group:
      [root@JD /]# chown hadoop:hadoop /home/hadoop/software/*
      
      Verify the change:
      [root@JD /]# ll /home/hadoop/software
      total 424176
      -rw-r--r-- 1 hadoop hadoop 434354462 Nov 28 14:15 hadoop-2.6.0-cdh5.16.2.tar.gz
    
  • Create the directories and extract the tarball

      Switch to the hadoop user:
      [root@JD ~]# su - hadoop
      Last login: Thu Nov 28 15:27:14 CST 2019 on pts/1
      Last failed login: Thu Nov 28 15:36:02 CST 2019 from 192.168.0.3 on ssh:notty
      There were 4 failed login attempts since the last successful login.
      [hadoop@JD ~]$ ll
      total 0
      [hadoop@JD ~]$ pwd
      /home/hadoop
      
      Create the directories:
      [hadoop@JD ~]$ mkdir app software data sourcecode log tmp lib
      [hadoop@JD ~]$ ll
      total 0
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 app        # extracted software; symlink target
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 data       # data files
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 lib        # third-party jars
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 log        # log files
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 software   # tarballs
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 sourcecode # source-code builds
      drwxrwxr-x 2 hadoop hadoop 6 Nov 28 17:20 tmp        # temporary files
      
      Extract the tarball:
      [hadoop@JD software]$ tar -zxvf hadoop-2.6.0-cdh5.16.2.tar.gz  -C ../app/
      
      [hadoop@JD software]$ cd ../app
      [hadoop@JD app]$ ll
      total 4
      drwxr-xr-x 14 hadoop hadoop 4096 Jun  3 19:11 hadoop-2.6.0-cdh5.16.2
      
      Create the symlink:
      [hadoop@JD app]$ ln -s hadoop-2.6.0-cdh5.16.2 hadoop 
      [hadoop@JD app]$ ll
      total 4
      lrwxrwxrwx  1 hadoop hadoop   22 Nov 28 17:32 hadoop -> hadoop-2.6.0-cdh5.16.2
      drwxr-xr-x 14 hadoop hadoop 4096 Jun  3 19:11 hadoop-2.6.0-cdh5.16.2
    
  • Configure the JDK

      [hadoop@JD app]$ cd hadoop/etc/hadoop/
      [hadoop@JD hadoop]$ vi hadoop-env.sh
      export JAVA_HOME=/usr/java/jdk1.8.0_45
    
  • Configuration files
    Configure core-site.xml:

      <configuration>
          <property>
              <name>fs.defaultFS</name>
              <value>hdfs://JD:9000</value>
          </property>
      </configuration>
    

    Configure hdfs-site.xml:

      <configuration>
          <property>
              <name>dfs.replication</name>
              <value>1</value>
          </property>
      </configuration>
    

Configuring the hadoop user's personal environment variables

[hadoop@JD ~]$ pwd
/home/hadoop

Edit the .bashrc file to configure the Hadoop environment variables:
[hadoop@JD ~]$ vi .bashrc
export HADOOP_HOME=/home/hadoop/app/hadoop
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:$PATH

Apply the environment variables:
[hadoop@JD ~]$ source .bashrc
Verify that they took effect:
[hadoop@JD ~]$ which hadoop
~/app/hadoop/bin/hadoop

Formatting

Format the NameNode (it succeeded if you see "successfully formatted"):
[hadoop@JD ~]$ hdfs namenode -format	 
 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.

First startup

On the first startup we find we still have to type yes, and the daemons do not all start under the configured hostname JD: the DataNode starts as localhost and the SecondaryNameNode as 0.0.0.0. So we need to fix the DataNode and SecondaryNameNode configuration.

Start HDFS:
[hadoop@JD ~]$ start-dfs.sh

19/11/28 17:59:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [JD]
JD: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-namenode-JD.out
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:OLqoaMxlGFbCq4sC9pYgF+FdbcXHbEbtSrnMiGGFbVw.
ECDSA key fingerprint is MD5:d3:5b:4a:ef:8e:00:41:a0:5e:80:ef:75:76:8a:a3:49.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-datanode-JD.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:OLqoaMxlGFbCq4sC9pYgF+FdbcXHbEbtSrnMiGGFbVw.
ECDSA key fingerprint is MD5:d3:5b:4a:ef:8e:00:41:a0:5e:80:ef:75:76:8a:a3:49.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-secondarynamenode-JD.out
19/11/28 18:00:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@JD ~]$ jps
27185 NameNode
27467 SecondaryNameNode
27309 DataNode
27583 Jps

Configuring the DN and SNN to also start as JD

NN: JD, controlled by fs.defaultFS in core-site.xml
DN: configured in the slaves file
SNN: configured in hdfs-site.xml

[hadoop@JD hadoop]$ pwd
/home/hadoop/app/hadoop/etc/hadoop
Delete localhost and replace it with JD:
[hadoop@JD hadoop]$ vi slaves 
JD

Add the properties:
[hadoop@JD hadoop]$ vi hdfs-site.xml 
 <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>JD:50090</value>
</property>
<property>
    <name>dfs.namenode.secondary.https-address</name>
    <value>JD:50091</value>
</property>

Restart: everything now starts as JD, no yes prompt is needed, and jps shows all the processes are up. Success.

[hadoop@JD ~]$ start-dfs.sh
19/11/28 18:16:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [JD]
JD: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-namenode-JD.out
JD: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-datanode-JD.out
Starting secondary namenodes [JD]
JD: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.16.2/logs/hadoop-hadoop-secondarynamenode-JD.out
19/11/28 18:17:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[hadoop@JD ~]$ jps
30343 Jps
30027 DataNode
29886 NameNode
30190 SecondaryNameNode

Where to find the official parameter files

core-default.xml: https://hadoop.apache.org/docs/r2.10.0/hadoop-project-dist/hadoop-common/core-default.xml

hdfs-default.xml: https://hadoop.apache.org/docs/r2.10.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

The three daemons and the web UI address

namenode: the name node, the "boss"; read and write requests go through it first; the master node
datanode: the data node, the "worker"; stores and retrieves data; a slave node
secondarynamenode: the second name node, the "second in command"; its checkpoint of the metadata lags behind (h+1)
Big-data components are almost all master/slave architectures: HDFS,
HBase (where read and write requests do not go through the boss, the master process)

The HDFS web UI: http://JD:50070
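
A quick shell-side check that the web UI is answering (a sketch; port 50070 and the /jmx endpoint are Hadoop 2.x defaults):

curl -s -o /dev/null -w "%{http_code}\n" http://JD:50070/    # expect 200
curl -s "http://JD:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo" | head   # NameNode metrics as JSON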

Common hadoop fs commands

Create a local file:
[hadoop@JD ~]$ echo "111111" > aaa.txt
[hadoop@JD ~]$ ll
total 4
-rw-rw-r-- 1 hadoop hadoop  7 Nov 28 22:06 aaa.txt
drwxrwxr-x 3 hadoop hadoop 48 Nov 28 17:32 app
drwxrwxr-x 2 hadoop hadoop  6 Nov 28 17:20 data
drwxrwxr-x 2 hadoop hadoop  6 Nov 28 17:20 lib
drwxrwxr-x 2 hadoop hadoop  6 Nov 28 17:20 log
drwxrwxr-x 2 hadoop hadoop 42 Nov 28 17:22 software
drwxrwxr-x 2 hadoop hadoop  6 Nov 28 17:20 sourcecode
drwxrwxr-x 2 hadoop hadoop  6 Nov 28 17:20 tmp

Upload a file:
[hadoop@JD ~]$ hadoop fs -put aaa.txt /
List files and directories:
[hadoop@JD ~]$ hadoop fs -ls   /   
-rw-r--r--   1 hadoop supergroup          7 2019-11-28 22:07 /aaa.txt

Create a directory:
[hadoop@JD ~]$ hadoop fs -mkdir /bigdata
[hadoop@JD ~]$ hadoop fs -ls /
-rw-r--r--   1 hadoop supergroup          7 2019-11-28 22:07 /aaa.txt
drwxr-xr-x   - hadoop supergroup          0 2019-11-28 22:11 /bigdata

Download a file:
[hadoop@JD ~]$ hadoop fs -get /aaa.txt

Delete a file:
[hadoop@JD ~]$ hadoop fs -rm /aaa.txt
[hadoop@JD ~]$ hadoop fs -ls /       
drwxr-xr-x   - hadoop supergroup          0 2019-11-28 22:11 /bigdata
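
A few more everyday commands (a sketch; these flags all exist in Hadoop 2.6):

[hadoop@JD ~]$ hadoop fs -mkdir -p /bigdata/sub   # create nested directories
[hadoop@JD ~]$ hadoop fs -rm -r /bigdata          # remove a directory recursively
[hadoop@JD ~]$ hadoop fs -du -h /                 # per-path usage, human readable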

How to ensure data quality

This section only covers the case where the row counts differ; fixing mismatched content is not covered here yet.
Storage is the bedrock of big data. Even if HDFS never goes down, is the data it holds really accurate?
A Sqoop extraction that reports no errors still has a small probability of losing data.
And if a storage disk dies, is the data still accurate?
So how do we verify accuracy, and how do we make the data accurate? A mechanism for this is a must.
Data quality verification: first check that the row counts match, then check that the contents match.
Row-count verification: count (a sketch follows the diagram below).
If the counts differ, the refresh mechanism inserts or deletes rows (this covers roughly 95% of cases, and can be driven by Spark); for content verification, sample roughly 5% of the rows.
Query all columns from the upstream table; from the downstream table only the primary key is needed:
upstream MySQL: all columns, table a
downstream Phoenix: primary key only, table b

      a full outer join b

      Downstream missing rows: the upstream has 1,3,5,6 but the downstream only 1,3,5 because a row has not yet been synced downstream.
      Downstream extra rows: the upstream has 1,3,5 but the downstream has 1,3,5,6 because a row was not deleted downstream in time, or some other bug left it there.
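
A minimal sketch of the count check, run separately on each side (a and b as in the example tables below):

-- upstream (MySQL)
select count(*) from a;
-- downstream (Phoenix)
select count(*) from b;
-- if the two counts differ, fall through to the full outer join comparison below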

Upstream table a:

ID  NAME    AGE
1	xxx1	11
2	xxx2	12
3	xxx3	13
7   xxx7    17

Downstream table b:

ID  NAME    AGE
1	xxx1	11
3	xxx3	13
5	xxx5 	15
6	xxx6 	16

Data refresh mechanism: count has told us the upstream and downstream disagree.
Enter the refresh mechanism: full outer join the two tables and compare the NULL values on each side.
Perform the full outer join:

select a.id, a.name, a.age, b.id from a full outer join b on a.id = b.id

The result is:
a.ID  a.NAME  a.AGE  b.ID
1     xxx1    11     1
2     xxx2    12     null
3     xxx3    13     3
7     xxx7    17     null
null  null    null   5
null  null    null   6
Then select the rows where a.id is null: a null a.id proves the downstream has rows the upstream does not, so those downstream rows must be deleted.

-- rows where a.id is null are downstream extras; delete them by their b.id
delete from b where id = 5
delete from b where id = 6

The final SQL is:
delete from b where id in (
  select b.id from b
  left join a
  on b.id = a.id
  where a.id is null
)

Select the rows where b.id is null: a null b.id proves the downstream is missing rows, so they must be filled in using the full set of upstream columns.

-- rows where b.id is null are missing downstream; insert them with all upstream columns
insert into b values (2, 'xxx2', 12)
insert into b values (7, 'xxx7', 17)

The final SQL is:
insert into b (id, name, age)
select a.id, a.name, a.age from b
right join a
on a.id = b.id
where b.id is null

A full outer join is really just the left join and the right join combined; the rows that come back with NULLs on one side are exactly the missing or extra ones.
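
Engines that lack FULL OUTER JOIN (MySQL, for one) can therefore build the same result from the two halves; a sketch over the same a and b tables:

-- left half: every upstream row, b.id null where the downstream lacks it
select a.id, a.name, a.age, b.id from a left join b on a.id = b.id
union
-- right half: every downstream row, a.id null where the upstream lacks it
select a.id, a.name, a.age, b.id from a right join b on a.id = b.id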
