Experiment Steps
Preparation
Download Prometheus, Grafana, windows_exporter, Flume, the JDK (Linux), MySQL, the MySQL JDBC driver, IDEA, and Maven.
Setting up the monitoring dashboard
Unzip Prometheus and run prometheus.exe.
Open http://localhost:9090 in a browser.
Configure Windows server monitoring
Double-click windows_exporter-0.25.1-amd64.msi to install it, then open http://127.0.0.1:9182/metrics in a browser.
Edit prometheus.yml (it is in the extracted Prometheus directory) and add the highlighted content; YAML is indentation-sensitive, so pay close attention to the formatting.
Create windows.yml in the same directory.
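The content to add is shown only in the original screenshot; a minimal sketch of the scrape configuration, assuming the exporter from the previous step on port 9182 and a job name of windows (both assumptions), might look like:

```yaml
# Appended under scrape_configs in prometheus.yml (job name is an example)
scrape_configs:
  - job_name: "windows"
    file_sd_configs:
      - files:
          - "windows.yml"
```

and windows.yml then lists the targets:

```yaml
# windows.yml — the target address is an example
- targets:
    - "127.0.0.1:9182"
  labels:
    instance: "windows-server"
```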
Restart Prometheus (close the console window, then double-click prometheus.exe again) and open http://localhost:9090/targets in a browser.
Unzip Grafana and run grafana-server.exe in the bin directory.
Open http://127.0.0.1:3000; the default username and password are both admin.
Add a data source.
If the page below appears, the setup succeeded.
Setting up Flume log aggregation
Start a CentOS 7 virtual machine and upload the Flume and JDK archives with Xftp or SecureFX (whichever you prefer).
On the VM, run: mkdir /opt/software /opt/module && cd /opt/software
Then drag the two archives into /opt/software.
Extract them:
tar -zxvf /opt/software/apache-flume-1.7.0-bin.tar.gz -C /opt/module/ && mv /opt/module/apache-flume-1.7.0-bin/ /opt/module/flume
tar -zxvf /opt/software/jdk-8u401-linux-x64.tar.gz -C /opt/module/ && mv /opt/module/jdk1.8.0_401/ /opt/module/jdk
Configure the environment variables: vi /etc/profile
Reload them: source /etc/profile
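The profile edits themselves appear only in the original screenshot; a sketch, assuming the /opt/module layout used above, might be:

```shell
# Appended to /etc/profile (paths assume the layout above)
export JAVA_HOME=/opt/module/jdk
export PATH=$PATH:$JAVA_HOME/bin
export FLUME_HOME=/opt/module/flume
export PATH=$PATH:$FLUME_HOME/bin
```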
Enable the Flume configuration file: mv /opt/module/flume/conf/flume-env.sh.template /opt/module/flume/conf/flume-env.sh
Edit it: vi /opt/module/flume/conf/flume-env.sh
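The edit in this step is only shown in the original screenshot; typically it just points Flume at the JDK, sketched here assuming the path above:

```shell
# /opt/module/flume/conf/flume-env.sh
export JAVA_HOME=/opt/module/jdk
```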
Setting up Ganglia monitoring
yum -y install httpd php
yum -y install rrdtool perl-rrdtool rrdtool-devel
yum -y install apr-devel
Install Ganglia
yum install -y epel-release
yum -y install ganglia-gmetad
yum -y install ganglia-web
yum install -y ganglia-gmond
Install telnet
yum install telnet -y
Modify the configuration files
vi /etc/httpd/conf.d/ganglia.conf — add the highlighted content.
Edit vi /etc/ganglia/gmetad.conf and change the address to your own IP!
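The exact line is visible only in the original screenshot; the usual change is the data_source entry, sketched here with an example cluster name and IP:

```
# /etc/ganglia/gmetad.conf — cluster name and address are examples; use your own
data_source "master" 192.168.236.130
```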
Edit the hosts file: vi /etc/hosts
vi /etc/ganglia/gmond.conf
Be careful not to miss the commented lines here!
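The gmond.conf edits are only in the original screenshot; a common sketch (cluster name, addresses, and the multicast lines to comment out are all assumptions based on a typical unicast Ganglia setup):

```
/* /etc/ganglia/gmond.conf — name and addresses are examples; use your own */
cluster {
  name = "master"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  # mcast_join = 239.2.11.71    <- commented out; send to the host instead
  host = 192.168.236.130
  port = 8649
  ttl = 1
}
udp_recv_channel {
  # mcast_join = 239.2.11.71    <- commented out
  port = 8649
  bind = 192.168.236.130
  retry_bind = true
}
```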
Disable SELinux
vi /etc/selinux/config
Enable the services at boot
systemctl enable httpd && systemctl enable gmetad && systemctl enable gmond
Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
Grant permissions
chmod -R 777 /var/lib/ganglia
Reboot the virtual machine
init 6
Visit the page, replacing the address with your own IP:
http://192.168.236.130/ganglia/
Modify the Flume configuration
vi /opt/module/flume/conf/flume-env.sh — again, change the IP address to your own!
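The edit itself is only in the original screenshot; for Ganglia reporting it is typically a JAVA_OPTS line, sketched here with an example IP and heap sizes:

```shell
# /opt/module/flume/conf/flume-env.sh — IP is an example; use your own
export JAVA_OPTS="-Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=192.168.236.130:8649 -Xms100m -Xmx200m"
```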
Create the job file
mkdir /opt/module/flume/job
vi /opt/module/flume/job/flume-telnet-logger.conf
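The job file content is only in the original screenshot; since the later steps start agent a1 and send data with telnet localhost 44444, a sketch consistent with those steps would be:

```properties
# job/flume-telnet-logger.conf — a netcat source feeding a logger sink
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# source: listen on localhost:44444 (matches the telnet step below)
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# sink: write events to the console log
a1.sinks.k1.type = logger

# channel: in-memory buffer
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```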
Start the Flume job
/opt/module/flume/bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-telnet-logger.conf -Dflume.root.logger=INFO,console -Dflume.monitoring.type=ganglia -Dflume.monitoring.hosts=192.168.111.128:8649
Open another Xshell session (or another terminal on the VM) and send some data:
telnet localhost 44444
Check the monitoring page:
http://192.168.236.130/ganglia/?r=hour&cs=&ce=&c=master&h=master&tab=m&vn
Flume log aggregation (MySQL): installing MySQL
Upload the packages to /opt/software with Xftp.
Remove the mariadb dependency
rpm -e mariadb-libs-5.5.68-1.el7.x86_64 --nodeps
Install the MySQL packages
cd /opt/software
rpm -ivh mysql-community-common-5.7.26-1.el7.x86_64.rpm --nodeps
rpm -ivh mysql-community-libs-5.7.26-1.el7.x86_64.rpm --nodeps
rpm -ivh mysql-community-client-5.7.26-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.26-1.el7.x86_64.rpm --nodeps
Start the mysqld service
systemctl start mysqld
Find the temporary MySQL root password
grep "password" /var/log/mysqld.log
mysql -uroot -p
Once logged in successfully,
lower the password complexity requirements
set global validate_password_policy=LOW;
set global validate_password_length=4;
Change the password (set to root here)
alter user 'root'@'localhost' identified by 'root';
Enable remote access
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;
Exit MySQL: exit
Building the MySQL source JAR
Install Maven, then open IDEA and create a new Maven project.
If the machine does not have a JDK, install one first!
Add the dependencies:
<dependencies>
    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.7.0</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.16</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
        <version>1.7.12</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
        <version>1.7.12</version>
    </dependency>
</dependencies>
Pay close attention to the indentation!
Reload the Maven project; once the dependencies import successfully, the errors disappear. Right-click the project root and create the directories.
Create the resources folder
Create two resource files: jdbc.properties and log4j.properties
Add the configuration
jdbc.properties
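The jdbc.properties content is only in the original screenshot; the keys below match the ones SQLSourceHelper loads in its static block (dbDriver, dbUrl, dbUser, dbPassword), while the values are examples to adjust to your own host and credentials:

```properties
# jdbc.properties — values are examples; use your own host and credentials
dbDriver=com.mysql.jdbc.Driver
dbUrl=jdbc:mysql://192.168.236.130:3306/mysqlsource?useUnicode=true&characterEncoding=utf-8
dbUser=root
dbPassword=root
```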
log4j.properties
#--------console-----------
log4j.rootLogger=info,myconsole,myfile
log4j.appender.myconsole=org.apache.log4j.ConsoleAppender
log4j.appender.myconsole.layout=org.apache.log4j.SimpleLayout
#log4j.appender.myconsole.layout.ConversionPattern=%d [%t] %-5p [%c] - %m%n

#log4j.rootLogger=error,myfile
log4j.appender.myfile=org.apache.log4j.DailyRollingFileAppender
log4j.appender.myfile.File=/tmp/flume.log
log4j.appender.myfile.layout=org.apache.log4j.PatternLayout
log4j.appender.myfile.layout.ConversionPattern=%d [%t] %-5p [%c] - %m%n
Right-click the java directory and create a package.
Create two classes inside the package:
SQLSourceHelper
package com.nuit.source;

import org.apache.flume.Context;
import org.apache.flume.conf.ConfigurationException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.sql.*;
import java.text.ParseException;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class SQLSourceHelper {

    private static final Logger LOG = LoggerFactory.getLogger(SQLSourceHelper.class);

    private int runQueryDelay,   // interval between two queries
            startFrom,           // starting id
            currentIndex,        // current id
            recordSixe = 0,      // number of rows returned by each query
            maxRow;              // maximum rows per query

    private String table,        // table to operate on
            columnsToSelect,     // columns to select, supplied by the user
            customQuery,         // custom query supplied by the user
            query,               // the query that is actually built
            defaultCharsetResultSet; // charset of the result set

    // context, used to read the configuration file
    private Context context;

    // default values for the fields above; can be overridden in the flume job configuration
    private static final int DEFAULT_QUERY_DELAY = 10000;
    private static final int DEFAULT_START_VALUE = 0;
    private static final int DEFAULT_MAX_ROWS = 2000;
    private static final String DEFAULT_COLUMNS_SELECT = "*";
    private static final String DEFAULT_CHARSET_RESULTSET = "UTF-8";

    private static Connection conn = null;
    private static PreparedStatement ps = null;
    private static String connectionURL, connectionUserName, connectionPassword;

    // load static resources
    static {
        Properties p = new Properties();
        try {
            p.load(SQLSourceHelper.class.getClassLoader().getResourceAsStream("jdbc.properties"));
            connectionURL = p.getProperty("dbUrl");
            connectionUserName = p.getProperty("dbUser");
            connectionPassword = p.getProperty("dbPassword");
            Class.forName(p.getProperty("dbDriver"));
        } catch (IOException | ClassNotFoundException e) {
            LOG.error(e.toString());
        }
    }

    // obtain a JDBC connection
    private static Connection InitConnection(String url, String user, String pw) {
        try {
            Connection conn = DriverManager.getConnection(url, user, pw);
            if (conn == null)
                throw new SQLException();
            return conn;
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return null;
    }

    // constructor
    SQLSourceHelper(Context context) throws ParseException {
        // initialize the context
        this.context = context;

        // parameters with defaults: read from the flume job configuration, fall back to the default
        this.columnsToSelect = context.getString("columns.to.select", DEFAULT_COLUMNS_SELECT);
        this.runQueryDelay = context.getInteger("run.query.delay", DEFAULT_QUERY_DELAY);
        this.startFrom = context.getInteger("start.from", DEFAULT_START_VALUE);
        this.defaultCharsetResultSet = context.getString("default.charset.resultset", DEFAULT_CHARSET_RESULTSET);

        // parameters without defaults: read from the flume job configuration
        this.table = context.getString("table");
        this.customQuery = context.getString("custom.query");
        connectionURL = context.getString("connection.url");
        connectionUserName = context.getString("connection.user");
        connectionPassword = context.getString("connection.password");
        conn = InitConnection(connectionURL, connectionUserName, connectionPassword);

        // validate the configuration; throw if a mandatory parameter is missing
        checkMandatoryProperties();
        // get the current id
        currentIndex = getStatusDBIndex(startFrom);
        // build the query
        query = buildQuery();
    }

    // validate the configuration (table, query, and connection parameters)
    private void checkMandatoryProperties() {
        if (table == null) {
            throw new ConfigurationException("property table not set");
        }
        if (connectionURL == null) {
            throw new ConfigurationException("connection.url property not set");
        }
        if (connectionUserName == null) {
            throw new ConfigurationException("connection.user property not set");
        }
        if (connectionPassword == null) {
            throw new ConfigurationException("connection.password property not set");
        }
    }

    // build the SQL statement
    private String buildQuery() {
        String sql = "";
        // get the current id
        currentIndex = getStatusDBIndex(startFrom);
        LOG.info(currentIndex + "");
        if (customQuery == null) {
            sql = "SELECT " + columnsToSelect + " FROM " + table;
        } else {
            sql = customQuery;
        }
        StringBuilder execSql = new StringBuilder(sql);
        // use the id as the offset
        if (!sql.contains("where")) {
            execSql.append(" where ");
            execSql.append("id").append(">").append(currentIndex);
            return execSql.toString();
        } else {
            int length = execSql.toString().length();
            return execSql.toString().substring(0, length - String.valueOf(currentIndex).length()) + currentIndex;
        }
    }

    // run the query
    List<List<Object>> executeQuery() {
        try {
            // rebuild the sql on every query, because the id changes
            customQuery = buildQuery();
            // collection holding the results
            List<List<Object>> results = new ArrayList<>();
            if (ps == null) {
                ps = conn.prepareStatement(customQuery);
            }
            ResultSet result = ps.executeQuery(customQuery);
            while (result.next()) {
                // collection holding one row (multiple columns)
                List<Object> row = new ArrayList<>();
                // copy the returned columns into the collection
                for (int i = 1; i <= result.getMetaData().getColumnCount(); i++) {
                    row.add(result.getObject(i));
                }
                results.add(row);
            }
            LOG.info("execSql:" + customQuery + "\nresultSize:" + results.size());
            return results;
        } catch (SQLException e) {
            LOG.error(e.toString());
            // reconnect
            conn = InitConnection(connectionURL, connectionUserName, connectionPassword);
        }
        return null;
    }

    // convert the result set to strings: each row is a list, and each list becomes one string
    List<String> getAllRows(List<List<Object>> queryResult) {
        List<String> allRows = new ArrayList<>();
        if (queryResult == null || queryResult.isEmpty())
            return allRows;
        StringBuilder row = new StringBuilder();
        for (List<Object> rawRow : queryResult) {
            Object value = null;
            for (Object aRawRow : rawRow) {
                value = aRawRow;
                if (value == null) {
                    row.append(",");
                } else {
                    row.append(aRawRow.toString()).append(",");
                }
            }
            allRows.add(row.toString());
            row = new StringBuilder();
        }
        return allRows;
    }

    // update the offset metadata; called after every batch. The offset (the id) must be
    // persisted so the program can resume where it left off after an interruption.
    void updateOffset2DB(int size) {
        // keyed by source_tab: insert if absent, update otherwise (one record per source table)
        String sql = "insert into flume_meta(source_tab,currentIndex) VALUES('"
                + this.table + "','" + (recordSixe += size)
                + "') on DUPLICATE key update source_tab=values(source_tab),currentIndex=values(currentIndex)";
        LOG.info("updateStatus Sql:" + sql);
        execSql(sql);
    }

    // execute a SQL statement
    private void execSql(String sql) {
        try {
            ps = conn.prepareStatement(sql);
            LOG.info("exec::" + sql);
            ps.execute();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    // get the current id offset
    private Integer getStatusDBIndex(int startFrom) {
        // query the current id from the flume_meta table
        String dbIndex = queryOne("select currentIndex from flume_meta where source_tab='" + table + "'");
        if (dbIndex != null) {
            return Integer.parseInt(dbIndex);
        }
        // no data means this is the first query or the table is still empty: return the initial value
        return startFrom;
    }

    // run a single-value query (the current id)
    private String queryOne(String sql) {
        ResultSet result = null;
        try {
            ps = conn.prepareStatement(sql);
            result = ps.executeQuery();
            while (result.next()) {
                return result.getString(1);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return null;
    }

    // release resources
    void close() {
        try {
            ps.close();
            conn.close();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }

    int getCurrentIndex() { return currentIndex; }

    void setCurrentIndex(int newValue) { currentIndex = newValue; }

    int getRunQueryDelay() { return runQueryDelay; }

    String getQuery() { return query; }

    String getConnectionURL() { return connectionURL; }

    private boolean isCustomQuerySet() { return (customQuery != null); }

    Context getContext() { return context; }

    public String getConnectionUserName() { return connectionUserName; }

    public String getConnectionPassword() { return connectionPassword; }

    String getDefaultCharsetResultSet() { return defaultCharsetResultSet; }
}
SQLSource
package com.nuit.source;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.text.ParseException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class SQLSource extends AbstractSource implements Configurable, PollableSource {

    // logger
    private static final Logger LOG = LoggerFactory.getLogger(SQLSource.class);

    // the sql helper
    private SQLSourceHelper sqlSourceHelper;

    @Override
    public long getBackOffSleepIncrement() {
        return 0;
    }

    @Override
    public long getMaxBackOffSleepInterval() {
        return 0;
    }

    @Override
    public void configure(Context context) {
        try {
            // initialization
            sqlSourceHelper = new SQLSourceHelper(context);
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }

    @Override
    public Status process() throws EventDeliveryException {
        try {
            // query the table
            List<List<Object>> result = sqlSourceHelper.executeQuery();
            // collection of events
            List<Event> events = new ArrayList<>();
            // event header map
            HashMap<String, String> header = new HashMap<>();
            // if the query returned data, wrap each row in an event
            if (!result.isEmpty()) {
                List<String> allRows = sqlSourceHelper.getAllRows(result);
                Event event = null;
                for (String row : allRows) {
                    event = new SimpleEvent();
                    event.setBody(row.getBytes());
                    event.setHeaders(header);
                    events.add(event);
                }
                // write the events to the channel
                this.getChannelProcessor().processEventBatch(events);
                // update the offset in the metadata table
                sqlSourceHelper.updateOffset2DB(result.size());
            }
            // wait before polling again
            Thread.sleep(sqlSourceHelper.getRunQueryDelay());
            return Status.READY;
        } catch (InterruptedException e) {
            LOG.error("Error processing row", e);
            return Status.BACKOFF;
        }
    }

    @Override
    public synchronized void stop() {
        LOG.info("Stopping sql source {} ...", getName());
        try {
            // release resources
            sqlSourceHelper.close();
        } finally {
            super.stop();
        }
    }
}
Build the JAR; after a successful build, the jar appears in the target directory.
Upload it to the virtual machine with Xftp.
On the virtual machine:
tar -zxvf /opt/software/mysql-connector-java-5.1.16.tar.gz
cp /opt/software/mysql-connector-java-5.1.16/mysql-connector-java-5.1.16-bin.jar /opt/module/flume/lib/
Copy the custom source JAR into Flume and create the job file
cp /opt/software/mysql_source-1.0-SNAPSHOT.jar /opt/module/flume/lib/
vi /opt/module/flume/job/mysql.conf
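The mysql.conf content is only in the original screenshot; the property names below match the keys SQLSourceHelper reads from the context (connection.url, connection.user, connection.password, table, run.query.delay, start.from), while the connection values are examples to adjust to your setup:

```properties
# job/mysql.conf — connection values are examples; use your own
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# the custom source built above
a1.sources.r1.type = com.nuit.source.SQLSource
a1.sources.r1.connection.url = jdbc:mysql://192.168.236.130:3306/mysqlsource
a1.sources.r1.connection.user = root
a1.sources.r1.connection.password = root
a1.sources.r1.table = students
a1.sources.r1.run.query.delay = 5000
a1.sources.r1.start.from = 0

a1.sinks.k1.type = logger

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
```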
Create the MySQL tables
mysql -uroot -proot
CREATE DATABASE mysqlsource;
USE mysqlsource;
CREATE TABLE `students` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) NOT NULL,
PRIMARY KEY (`id`) );
CREATE TABLE `flume_meta` (
`source_tab` varchar(255) NOT NULL,
`currentIndex` varchar(255) NOT NULL,
PRIMARY KEY (`source_tab`) );
Insert data into the table
insert into students values(1,'zhangsan');
insert into students values(2,'lisi');
insert into students values(3,'wangwu');
insert into students values(4,'zhaoliu');
Then exit MySQL: exit
Start the Flume agent
cd /opt/module/flume
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/mysql.conf -Dflume.root.logger=INFO,console
The inserted rows appear in the log output; the setup is complete!