1. Before installing Kafka Web Console, install sbt first (installation steps: http://blog.csdn.net/wwd0501/article/details/79267672).
2. Download the source code from https://github.com/claudemamo/kafka-web-console and unpack it: unzip kafka-web-console-master.zip. Then build it with sbt; before building, we need to make the following changes:
1. Kafka Web Console uses the H2 database by default; it supports the following databases:
H2 (default)
PostgreSql
Oracle
DB2
MySQL
Apache Derby
Microsoft SQL Server
For convenience we can use MySQL instead; only a small configuration change is needed. Open conf/application.conf and change this:
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:file:play"
# db.default.user=sa
# db.default.password=""
to:
db.default.driver=com.mysql.jdbc.Driver
db.default.url="jdbc:mysql://localhost:3306/kafkamonitor"
db.default.user=iteblog
db.default.password="wyp"
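The configuration above assumes that the kafkamonitor database and the iteblog account already exist on the MySQL server. If they don't, a minimal sketch for creating them (names and password taken from the config above; run as a MySQL admin user) could look like this:

```sql
-- Create the database and account referenced in conf/application.conf above.
CREATE DATABASE kafkamonitor DEFAULT CHARACTER SET utf8;
CREATE USER 'iteblog'@'localhost' IDENTIFIED BY 'wyp';
GRANT ALL PRIVILEGES ON kafkamonitor.* TO 'iteblog'@'localhost';
FLUSH PRIVILEGES;
```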
We also need to modify build.sbt to add the MySQL driver dependency:
libraryDependencies ++= Seq(
jdbc,
cache,
"org.squeryl" % "squeryl_2.10" % "0.9.5-6",
"com.twitter" % "util-zk_2.10" % "6.11.0",
"com.twitter" % "finagle-core_2.10" % "6.15.0",
"org.quartz-scheduler" % "quartz" % "2.2.1",
"mysql" % "mysql-connector-java" % "5.1.9", #增加mysql配置
"org.apache.kafka" % "kafka_2.10" % "0.8.1.1"
exclude("javax.jms", "jms")
exclude("com.sun.jdmk", "jmxtools")
exclude("com.sun.jmx", "jmxri")
)
2. Run the three scripts 1.sql, 2.sql and 3.sql under conf/evolutions/default/bak. Note that these SQL files cannot be run as-is: they contain syntax errors and need some fixes. The corrected SQL (with the three files merged) is as follows:
CREATE TABLE `zookeepers` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(200) NOT NULL DEFAULT '',
`host` VARCHAR(50) DEFAULT NULL,
`port` INT(11) DEFAULT NULL,
`statusId` INT(11) DEFAULT NULL,
`groupId` INT(11) DEFAULT NULL,
`chroot` VARCHAR(100) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `name_uni` (`name`)
) ENGINE=INNODB DEFAULT CHARSET=utf8;
CREATE TABLE groups (
id INT,
name VARCHAR(200),
PRIMARY KEY (id)
);
CREATE TABLE status (
id INT,
name VARCHAR(200),
PRIMARY KEY (id)
);
INSERT INTO groups (id, name) VALUES (0, 'ALL');
INSERT INTO groups (id, name) VALUES (1, 'DEVELOPMENT');
INSERT INTO groups (id, name) VALUES (2, 'PRODUCTION');
INSERT INTO groups (id, name) VALUES (3, 'STAGING');
INSERT INTO groups (id, name) VALUES (4, 'TEST');
INSERT INTO status (id, name) VALUES (0, 'CONNECTING');
INSERT INTO status (id, name) VALUES (1, 'CONNECTED');
INSERT INTO status (id, name) VALUES (2, 'DISCONNECTED');
INSERT INTO status (id, name) VALUES (3, 'DELETED');
CREATE TABLE offsetHistory (
id BIGINT AUTO_INCREMENT PRIMARY KEY,
zookeeperId BIGINT,
topic VARCHAR(255),
UNIQUE (zookeeperId, topic)
);
CREATE TABLE offsetPoints (
id BIGINT AUTO_INCREMENT PRIMARY KEY,
consumerGroup VARCHAR(255),
`timestamp` TIMESTAMP,
offsetHistoryId BIGINT,
`partition` INT,
`offset` BIGINT,
logSize BIGINT
);
CREATE TABLE settings (
key_ VARCHAR(255) PRIMARY KEY,
value VARCHAR(255)
);
INSERT INTO settings (key_, value) VALUES ('PURGE_SCHEDULE', '0 0 0 ? * SUN *');
INSERT INTO settings (key_, value) VALUES ('OFFSET_FETCH_INTERVAL', '30');
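One convenient way to run the merged statements above is to save them to a single file and load it from a mysql client session; the file path below is only an example:

```sql
-- Assumed workflow: the merged statements above are saved to /tmp/kafka-web-console.sql.
-- In a mysql client session (e.g. mysql -u iteblog -p), select the database and load the script:
USE kafkamonitor;
SOURCE /tmp/kafka-web-console.sql
```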
Once the changes above are done, we can compile the downloaded source code:
# sbt package
Compilation is fairly slow, since some dependencies download very slowly; please be patient.
Finally, we can start the Kafka Web Console monitoring system with the following command:
# sbt run
3. Open the console in a browser at: http://ip:9000/ (replace ip with your server's address).