Big Data Collection and Analysis Techniques

This post walks through using shell scripts to automate creating Hive tables, importing and updating data with DataX, and loading the aggregated statistics into MySQL. It involves Hive, DataX, Docker, and MySQL, automating the whole flow from data import to statistical analysis.

Big Data course final.

First, seed MySQL with some data; everything below reads from it.
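
The post doesn't show the seeding step itself, so here is a minimal sketch, assuming MySQL runs at 192.168.174.10 with a test database and a table matching the DataX reader config used later; the sample rows are invented for illustration:

mysql -h 192.168.174.10 -uroot -p123456 <<'SQL'
CREATE DATABASE IF NOT EXISTS test;
USE test;
-- id is an integer here because DataX's splitPk only splits on integer columns
CREATE TABLE IF NOT EXISTS order_status_log20200901 (
  id INT,
  order_id VARCHAR(20),
  order_status VARCHAR(20),
  operate_time VARCHAR(30)
);
-- made-up sample rows, just so the import has something to move
INSERT INTO order_status_log20200901 VALUES
  (1,'1001','1001','2020-09-01 10:00:00'),
  (2,'1002','1002','2020-09-01 11:30:00');
SQL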

1. Automatically creating Hive tables

Start the cluster.
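
The post doesn't list the startup commands; a typical sequence for a ZooKeeper-based HA cluster like this one might be (a sketch only; adjust to your deployment, and assume the scripts are on PATH):

zkServer.sh start     # on every ZooKeeper node
start-dfs.sh          # NameNodes, DataNodes, JournalNodes, ZKFC
start-yarn.sh         # ResourceManager and NodeManagers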

touch makehive.sh
#!/bin/bash

# Convert the two YYYYMMDD arguments into Unix timestamps
beg_date=`date -d "${1}" +%s`
end_date=`date -d "${2}" +%s`

# Refuse to run if the range is reversed
if((beg_date > end_date));then
 echo "error: beg_date must not be later than end_date"
 exit 1;
fi

# Step forward one day (86400 seconds) at a time, creating one table per day
currentDate=""
for((i=${beg_date};i<=${end_date};i=i+86400))
do
 currentDate=`date -d @${i} +%Y%m%d`
 echo "-----create /${currentDate}-----"
 hive -e "use data;create table order_status_log${currentDate}(id string,order_id string,order_status string,operate_time string) row format delimited fields terminated by ',';"
done

Enter Hive and create the database first (the script assumes it exists):

CREATE DATABASE data;

Run the script:

sh makehive.sh 20200901 20200903

Result:

hive> use data;
OK
Time taken: 0.459 seconds
hive> show databases;
OK
data
default
Time taken: 0.106 seconds, Fetched: 2 row(s)
hive> show tables;
OK
order_status_log20200901
order_status_log20200902
order_status_log20200903
order_status_log20211207
Time taken: 0.059 seconds, Fetched: 4 row(s)
hive> 

2. Manually importing data

First, check which node hosts the active NameNode (the DataX hdfswriter's defaultFS must point at it):

[root@hadoop1 /]# hdfs haadmin -getServiceState nn1
standby
[root@hadoop1 /]# hdfs haadmin -getServiceState nn2
active

The active NameNode is nn2, i.e. hadoop2.
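
To avoid checking by hand, the lookup can be scripted with the same haadmin call; a small sketch, assuming the nn1/nn2 IDs used above:

#!/bin/bash
# Print whichever NameNode currently reports "active"
for nn in nn1 nn2; do
  if [ "`hdfs haadmin -getServiceState ${nn}`" = "active" ]; then
    echo "active NameNode: ${nn}"
  fi
done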

Make a copy of an existing job file to work from:

[root@hadoop1 job]# cd /root/datax/job
[root@hadoop1 job]# ls
log.json  test.json
[root@hadoop1 job]# cp log.json my.json
[root@hadoop1 job]# ls
log.json  my.json  test.json
[root@hadoop1 job]# 

Edit the copy following this format:

{
    "job": {
        "setting": {
            "speed": {
                 "channel": 3
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "root",
                        "password": "123456",
                        "column": [
                            "id",
                            "order_id",
                            "order_status",
                            "operate_time"
                        ],
                        "splitPk": "id",
                        "connection": [
                            {
                                "table": [
                                    "order_status_log20200901"
                                ],
                                "jdbcUrl": [
     "jdbc:mysql://192.168.174.10:3306/test"
                                ]
                            }
                        ]
                    }
                },
               "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "defaultFS": "hdfs://192.168.174.11:8020",
                        "fileType": "text",
                        "path": "/user/hive/warehouse/data.db/order_status_log20200901",
                        "fileName": "order_status_log20200901",
                        "column": [
                            {
                                "name": "id",
                                "type": "STRING"
                            },
                            {
                                "name": "order_id",
                                "type": "STRING"
                            },
                            {
                                "name": "order_status",
                                "type": "STRING"
                            },
                            {
                                "name": "operate_time",
                                "type": "STRING"
                            }
                        ],
                        "writeMode": "append",
                        "fieldDelimiter": ",",
                        "compress":"GZIP"
                    }
                }
            }
        ]
    }
}

Run it:

python /root/datax/bin/datax.py /root/datax/job/my.json

Check:

hive> use data;select * from order_status_log20200901;
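
You can also confirm that DataX actually wrote files under the warehouse path from the job file; a quick sketch:

# Path taken from the hdfswriter config; the files are gzipped text
hdfs dfs -ls /user/hive/warehouse/data.db/order_status_log20200901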

3. An auto-import script

Create the script and its companion date file:

touch dataxplas.sh
touch date
#!/bin/bash

# Step 1: read the date currently used in the json file. The date file was
# created by hand and seeded with: echo 20200901 > /root/sh/date
date=`cat /root/sh/date`

# Step 2: compute the day after the date in the json file
afterday_timestamp=$((`date -d "${date}" +%s`+86400))
afterday=`date -d @${afterday_timestamp} +%Y%m%d`

# Globally replace the old date with the new one (e.g. 0901 -> 0902) in the
# table name, path, and fileName of the DataX job file
sed -i "s/order_status_log${date}/order_status_log${afterday}/g" /root/datax/job/my.json

# Update /root/sh/date so it stays in sync with the json file
echo ${afterday} > /root/sh/date

# Run DataX
python /root/datax/bin/datax.py /root/datax/job/my.json

The date file initially contains:

20200901

Run it twice (this imports 20200902 and then 20200903):

sh dataxplas.sh
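
Since each run imports exactly the next day, the script lends itself to scheduling; an example crontab entry (the time and log path are assumptions):

# Run the import at 00:30 every day and keep a log
30 0 * * * /bin/sh /root/sh/dataxplas.sh >> /root/sh/dataxplas.log 2>&1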

4. Automatically loading the partitioned table for statistics

Create the partitioned table:

hive> use data;
OK
Time taken: 0.408 seconds
hive> create table order_status_log(id string,order_id string,order_status string,operate_time string) partitioned by (day string) row format delimited fields terminated by ',';
OK
Time taken: 0.255 seconds

The script:

[root@hadoop1 sh]# touch fenqu.sh
[root@hadoop1 sh]# vi fenqu.sh
#!/bin/bash

# The INSERT statement into the partitioned table below had to be completed by
# yourself for this script to run. The partition column's value is
# date(operate_time), so dynamic partitioning must be enabled: the partition
# value comes from the data itself rather than from a literal.

beg_date=`date -d "${1}" +%s`
end_date=`date -d "${2}" +%s`

if((beg_date > end_date));then
  echo "error: beg_date must not be later than end_date"
  exit 1;
fi

currentDate=""
for((i=${beg_date};i<=${end_date};i=i+86400))
do
  currentDate=`date -d @${i} +%Y%m%d`
  hive -e "
  set hive.exec.dynamic.partition.mode=nonstrict;
  set hive.exec.dynamic.partition=true;
  set hive.exec.max.dynamic.partitions=1000;
  insert into table data.order_status_log partition(day) select id,order_id,order_status,operate_time,date(operate_time) as day from data.order_status_log${currentDate};"
done

# Aggregate per-day counts from the partitioned table into a result file
hive -S -e "select day,count(id) from data.order_status_log group by day;" > /root/sh/result

Run the script:

sh fenqu.sh 20200901 20200903

Open result and confirm it contains data.
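
As a further check, you can list the partitions the dynamic insert created; a quick sketch:

hive -e "show partitions data.order_status_log;"
cat /root/sh/result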

5. Uploading to MySQL in Docker

[root@localhost ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
mysql        5.7       738e7101490b   5 days ago     448MB
centos       7         eeb6ee3f44bd   2 months ago   204MB
mysql        <none>    1d7aba917169   3 months ago   448MB
[root@localhost ~]# docker run -itd --name=mysql_container1 -p 8888:3306 -e MYSQL_ROOT_PASSWORD=123456 1d7aba917169 /bin/bash
[root@localhost ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS                                                  NAMES
63c12a242724   1d7aba917169   "docker-entrypoint.s…"   38 seconds ago   Up 38 seconds   33060/tcp, 0.0.0.0:8888->3306/tcp, :::8888->3306/tcp   mysql_container1
[root@localhost ~]# docker exec -it mysql_container1 /bin/bash
root@70aa8e71a6a3:~# service mysql start 
root@70aa8e71a6a3:~# service mysql status

Enable remote connections to the database:

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;

Connect and create the database and table:

mysql> CREATE DATABASE data;
Query OK, 1 row affected (0.00 sec)

mysql> use data;
Database changed
mysql> CREATE TABLE `hive_count`( `hive_date` DATE, `hive_result` INT )ENGINE=INNODB CHARSET=utf8;
Query OK, 0 rows affected (0.00 sec)
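
Before writing the export script, it is worth confirming the container is reachable from the Hadoop side through the mapped port; a quick sketch using the same host IP and port as the script below:

# From hadoop1: port 8888 on the Docker host maps to 3306 in the container
mysql -h 192.168.174.9 -P 8888 -uroot -p123456 -e "select 1;"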

The export script:

[root@hadoop1 sh]# cat  docutemysql.sh 
#!/bin/bash
user="root"
password="123456"
host="192.168.174.9"
port=8888

# Connection command for the MySQL instance exposed by the container
mysql_conn="mysql -h${host} -u${user} -P${port} -p${password}"

# Each line of the result file is "day<TAB>count"; read splits on whitespace
cat /root/sh/result | while read day count
do
  $mysql_conn -e "INSERT INTO data.hive_count VALUES('$day','$count')"
done

Run it, then verify the rows landed in MySQL. Done.
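
A minimal verification query, for instance:

mysql -h 192.168.174.9 -P 8888 -uroot -p123456 -e "select * from data.hive_count;"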
