The following demonstrates how to define Map, Array, and Row types in a DDL, and how to access their values in SQL.
The sample data looks like this:
{
    "funcName": "test",
    "data": {
        "snapshots": [{
            "content_type": "application/x-gzip-compressed-jpeg",
            "url": "https://blog.csdn.net/xianpanjia4616"
        }],
        "audio": [{
            "content_type": "audio/wav",
            "url": " https://bss.csdn.net/m/topic/blog_star2020/detail?username=xianpanjia4616"
        }]
    },
    "resultMap": {
        "result": {
            "cover": "/data/test/log.txt"
        },
        "isSuccess": true
    },
    "meta": {
        "video_type": "normal"
    },
    "type": 2,
    "timestamp": 1610549997263,
    "arr": [{
        "address": "北京市海淀区",
        "city": "beijing"
    }, {
        "address": "北京市海淀区",
        "city": "beijing"
    }, {
        "address": "北京市海淀区",
        "city": "beijing"
    }],
    "map": {
        "flink": 456
    },
    "doublemap": {
        "inner_map": {
            "key": 123
        }
    }
}
The data above contains Map, Array, and Row types. How should such a structure be declared in the CREATE TABLE DDL?
Defining the DDL
CREATE TABLE kafka_source (
    funcName STRING,
    data ROW<snapshots ARRAY<ROW<content_type STRING,url STRING>>,audio ARRAY<ROW<content_type STRING,url STRING>>>,
    resultMap ROW<`result` MAP<STRING,STRING>,isSuccess BOOLEAN>,
    meta MAP<STRING,STRING>,
    `type` INT,
    `timestamp` BIGINT,
    arr ARRAY<ROW<address STRING,city STRING>>,
    map MAP<STRING,INT>,
    doublemap MAP<STRING,MAP<STRING,INT>>,
    proctime as PROCTIME()
) WITH (
    'connector' = 'kafka',  -- use the kafka connector
    'topic' = 'test',  -- kafka topic
    'properties.bootstrap.servers' = 'master:9092,storm1:9092,storm2:9092',  -- broker addresses
    'properties.group.id' = 'jason_flink_test',  -- consumer group id
    'scan.startup.mode' = 'latest-offset',  -- where to start reading
    'format' = 'json',  -- the source format is json
    'json.fail-on-missing-field' = 'true',  -- fail the job when a field is missing
    'json.ignore-parse-errors' = 'false'  -- do not silently skip records that fail to parse
)
Querying the data
select
    funcName,
    doublemap['inner_map']['key'],
    count(data.snapshots[1].url),
    `type`,
    TUMBLE_START(proctime, INTERVAL '30' second) as t_start
from kafka_source
group by TUMBLE(proctime, INTERVAL '30' second), funcName, `type`, doublemap['inner_map']['key']
The result of running the SQL (the leading true/false flag is the insert/retract marker of the changelog output when the result is printed as a retract stream):
4> (true,test,123,6,2,2021-01-15T03:31)
4> (false,test,123,6,2,2021-01-15T03:31)
4> (true,test,123,8,2,2021-01-15T03:31)
4> (false,test,123,8,2,2021-01-15T03:31)
4> (true,test,123,10,2,2021-01-15T03:31)
4> (false,test,123,10,2,2021-01-15T03:31)
4> (true,test,123,13,2,2021-01-15T03:31)
4> (false,test,123,13,2,2021-01-15T03:31)
4> (true,test,123,15,2,2021-01-15T03:31)
4> (true,test,123,3,2,2021-01-15T03:31:30)
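The query above reads a single array element with a 1-based index. To process every element of an array, Flink SQL also supports flattening it with CROSS JOIN UNNEST. A minimal sketch against the kafka_source table defined above:

```sql
-- Flatten the `arr` field: each array element becomes its own output row.
-- The column names (address, city) come from the ROW type in the DDL above.
SELECT funcName, t.address, t.city
FROM kafka_source
CROSS JOIN UNNEST(arr) AS t (address, city);
```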
The following demonstrates how to insert data into map, row, and related types.
Take the following data as a sample:
{
    "id": 1238123899121,
    "name": "asdlkjasjkdla998y1122",
    "date": "1990-10-14",
    "obj": {
        "time1": "12:12:43Z",
        "str": "sfasfafs",
        "lg": 2324342345
    },
    "arr": [
        {
            "f1": "f1str11",
            "f2": 134
        },
        {
            "f1": "f1str22",
            "f2": 555
        }
    ],
    "time": "12:12:43Z",
    "timestamp": "1990-10-14T12:12:43Z",
    "map": {
        "flink": 123
    },
    "mapinmap": {
        "inner_map": {
            "key": 234
        }
    }
}
Definition
CREATE TABLE json_source (
    id BIGINT,
    name STRING,
    `date` DATE,
    obj ROW<time1 TIME,str STRING,lg BIGINT>,
    arr ARRAY<ROW<f1 STRING,f2 INT>>,
    `time` TIME,
    `timestamp` TIMESTAMP(3),
    `map` MAP<STRING,BIGINT>,
    mapinmap MAP<STRING,MAP<STRING,INT>>,
    proctime as PROCTIME()
) WITH (
    'connector.type' = 'kafka',
    'connector.topic' = 'test',
    'connector.properties.zookeeper.connect' = 'localhost:2181',
    'connector.properties.bootstrap.servers' = 'localhost:9092',
    'connector.properties.group.id' = 'testGroup',
    'connector.version' = 'universal',
    'format.type' = 'json',
    'connector.startup-mode' = 'latest-offset'
);
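Note that this WITH clause uses the legacy (pre-Flink-1.11) `connector.*` option keys, while the first example used the newer option style. A hedged sketch of the same table declared with the newer option keys (option values assumed unchanged):

```sql
CREATE TABLE json_source (
    id BIGINT,
    name STRING,
    `date` DATE,
    obj ROW<time1 TIME,str STRING,lg BIGINT>,
    arr ARRAY<ROW<f1 STRING,f2 INT>>,
    `time` TIME,
    `timestamp` TIMESTAMP(3),
    `map` MAP<STRING,BIGINT>,
    mapinmap MAP<STRING,MAP<STRING,INT>>,
    proctime as PROCTIME()
) WITH (
    'connector' = 'kafka',
    'topic' = 'test',
    'properties.bootstrap.servers' = 'localhost:9092',
    'properties.group.id' = 'testGroup',
    'scan.startup.mode' = 'latest-offset',
    'format' = 'json'
);
```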
Querying
select
    id,
    name,
    `date`,
    obj.str,
    arr[1].f1,
    `map`['flink'],
    mapinmap['inner_map']['key']
from json_source;
Constructing the data

Note that the original insert listed only some of the columns; the select list must cover every physical column of json_source, in order, and the map values must match the declared value types:

insert into json_source
select
    111 as id,
    'name' as name,
    CURRENT_DATE as `date`,
    Row(CURRENT_TIME, 'ss', 123) as obj,
    Array[Row('f', 1), Row('s', 2)] as arr,
    CURRENT_TIME as `time`,
    CURRENT_TIMESTAMP as `timestamp`,
    -- MAP takes alternating keys and values: MAP[k1, v1, k2, v2, ...];
    -- the values are cast to BIGINT to match MAP<STRING,BIGINT>
    Map['k1', CAST(1 AS BIGINT), 'k2', CAST(2 AS BIGINT)] as `map`,
    Map['inner_map', Map['k', 1]] as mapinmap;
Flink-to-JSON data type mapping
Flink SQL type | JSON type |
---|---|
CHAR / VARCHAR / STRING | string |
BOOLEAN | boolean |
BINARY / VARBINARY | string with encoding: base64 |
DECIMAL | number |
TINYINT | number |
SMALLINT | number |
INT | number |
BIGINT | number |
FLOAT | number |
DOUBLE | number |
DATE | string with format: date |
TIME | string with format: time |
TIMESTAMP | string with format: date-time |
TIMESTAMP_WITH_LOCAL_TIME_ZONE | string with format: date-time (with UTC time zone) |
INTERVAL | number |
ARRAY | array |
MAP / MULTISET | object |
ROW | object |
Notes:
- Every {} object in the JSON must be represented with a ROW type
- A MAP definition needs fixed key and value types, e.g. MAP<STRING,INT>; if the keys/values in {k1:v1, k2:'v2', ...} mix several types, use ROW instead
- Every [] in the JSON must be represented with an ARRAY type
- Array subscripts start at 1, not 0, as in data.snapshots[1].url in the SQL above
- Reserved keywords must always be wrapped in backticks, as with `type` in the SQL above
- The field types and order in the select list must exactly match the columns of the sink table
- UDFs can be used directly in the CREATE TABLE statement
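The indexing and quoting rules above can be sketched in one small example. The table name, columns, and the datagen connector here are assumptions made purely for illustration:

```sql
-- Hypothetical table used only to illustrate the notes above.
CREATE TABLE demo (
    `type` INT,                    -- reserved word, so it must be backtick-quoted
    items ARRAY<ROW<url STRING>>,  -- JSON [] -> ARRAY, JSON {} -> ROW
    tags MAP<STRING,INT>           -- fixed key and value types
) WITH ('connector' = 'datagen');

SELECT
    `type`,        -- backticks required everywhere the keyword appears
    items[1].url,  -- array subscripts start at 1, not 0
    tags['flink']  -- map access by key
FROM demo;
```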