1. Start the Flink cluster: bin/start-cluster.sh
2. Start the SQL client: sql-client.sh embedded -l libs/
3. Configuration file:
Add the table definition (tip: for the Kafka connector, use version: universal):
tables:
  - name: orders
    type: source
    update-mode: append
    connector:
      property-version: 1
      type: kafka
      version: universal
      topic: ck1
      startup-mode: latest-offset
      properties:
        - key: zookeeper.connect
          value: hdp-1:2181
        - key: bootstrap.servers
          value: hdp-2:9092
        - key: group.id
          value: test-consumer-group
    format:
      property-version: 1
      type: json
      schema: "ROW(order_id LONG, shop_id VARCHAR, member_id LONG, trade_amt DOUBLE, pay_time TIMESTAMP)"
    schema:
      - name: order_id
        type: LONG
      - name: shop_id
        type: VARCHAR
      - name: member_id
        type: LONG
      - name: trade_amt
        type: DOUBLE
      - name: payment_time
        type: TIMESTAMP
        rowtime:
          timestamps:
            type: "from-field"
            from: "pay_time"
          watermarks:
            type: "periodic-bounded"
            delay: "60000"
4. Download the flink-connector-kafka jar and the flink-json jar and place them in the sql_libs directory (the directory passed with -l when starting the SQL client).
Note: the flink-connector jar must be downloaded from the official site:
https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/kafka.html
flink-json can be fetched from the Maven repository:
https://mvnrepository.com/artifact/org.apache.flink/flink-json/1.11.1
5. Test:
Kafka test data:
{"order_id": "1","shop_id": "AF18","member_id": "3410211","trade_amt": "100.00","pay_time": "2021-01-19T09:45:00Z"}
{"order_id": "1","shop_id": "AF18","member_id": "3410211","trade_amt": "100.00","pay_time": "2021-01-19T09:45:01Z"}
{"order_id": "1","shop_id": "AF18","member_id": "3410211","trade_amt": "100.00","pay_time": "2021-01-19T09:45:02Z"}
{"order_id": "1","shop_id": "AF18","member_id": "3410211","trade_amt": "100.00","pay_time": "2021-01-19T09:45:03Z"}
{"order_id": "1","shop_id": "AF18","member_id": "3410211","trade_amt": "100.00","pay_time": "2021-01-19T09:45:04Z"}
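The JSON lines above can be published to the ck1 topic with kafka-console-producer. For scripted tests, a small generator can emit them; this is a plain-Python sketch (no Kafka client dependency assumed), producing records that match the sample data, one second apart:

```python
import json
from datetime import datetime, timedelta

def make_records(n: int):
    """Build n test orders, one second apart, matching the sample data above."""
    base = datetime(2021, 1, 19, 9, 45, 0)
    for i in range(n):
        yield json.dumps({
            "order_id": "1",
            "shop_id": "AF18",
            "member_id": "3410211",
            "trade_amt": "100.00",
            "pay_time": (base + timedelta(seconds=i)).strftime("%Y-%m-%dT%H:%M:%SZ"),
        })

lines = list(make_records(5))
for line in lines:
    # Pipe this output into kafka-console-producer to publish to topic ck1.
    print(line)
```

Note that the fields are serialized as JSON strings, matching the sample records, even though the table schema declares numeric types; the JSON format handles the conversion on ingestion.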
flink-sql:
1. SELECT * FROM orders;
2、
SELECT
shop_id
, TUMBLE_START(payment_time, INTERVAL '1' MINUTE) AS tumble_start
, TUMBLE_END(payment_time, INTERVAL '1' MINUTE) AS tumble_end
, sum(trade_amt) AS amt
FROM orders
GROUP BY shop_id, TUMBLE(payment_time, INTERVAL '1' MINUTE);
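As a sanity check, the tumbling-window aggregation can be simulated outside Flink. The sketch below (plain Python, no Flink dependency) buckets the five test records into 1-minute windows keyed by shop_id and sums trade_amt; all five fall into the window [09:45:00, 09:46:00), giving amt = 500.0. Keep in mind that with the 60000 ms watermark delay configured above, Flink itself only emits this window after the watermark passes the window end, i.e. after it sees an event with pay_time at or beyond 2021-01-19T09:47:00Z.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# The five test records from the section above.
records = [
    {"shop_id": "AF18", "trade_amt": "100.00",
     "pay_time": "2021-01-19T09:45:0%dZ" % i}
    for i in range(5)
]

def tumble_start(ts: datetime, size: timedelta) -> datetime:
    """Align an event time down to the start of its tumbling window."""
    epoch = datetime(1970, 1, 1)
    return ts - ((ts - epoch) % size)

windows = defaultdict(float)
for r in records:
    ts = datetime.strptime(r["pay_time"], "%Y-%m-%dT%H:%M:%SZ")
    start = tumble_start(ts, timedelta(minutes=1))
    windows[(r["shop_id"], start)] += float(r["trade_amt"])

for (shop, start), amt in sorted(windows.items()):
    # One row per (shop_id, window), like the TUMBLE query's output.
    print(shop, start, start + timedelta(minutes=1), amt)
```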