With the widespread adoption of IoT, time-series databases are being used more and more in software development, and InfluxDB, the top-ranked time-series database on the market, is usually the first choice.
I recently took over a project in which the client's existing solution used a custom Connector to consume messages from a specific Kafka topic, parse them, and write the results into InfluxDB. This suggests the client was not very familiar with how InfluxDB is typically used. The improvement I proposed was to put Telegraf between Kafka and InfluxDB, with Kafka acting as a Telegraf input plugin, so that consuming, parsing, and forwarding the Kafka messages is handled entirely by Telegraf.
All that is needed is a change to the Telegraf configuration, as follows:
vi /etc/telegraf/telegraf.conf
# Configuration for sending metrics to InfluxDB
[[outputs.influxdb]]
## The full HTTP or UDP URL for your InfluxDB instance.
##
## Multiple URLs can be specified for a single cluster, only ONE of the
## urls will be written to each interval.
# urls = ["unix:///var/run/influxdb.sock"]
# urls = ["udp://127.0.0.1:8089"]
urls = ["http://127.0.0.1:8086"]
## The target database for metrics; will be created as needed.
## For UDP url endpoint database needs to be configured on server side.
database = "telegraf"
## The value of this tag will be used to determine the database. If this
## tag is not set the 'database' option is used as the default.
# database_tag = ""
## If true, no CREATE DATABASE queries will be sent. Set to true when using
## Telegraf with a user without permissions to create databases or when the
## database already exists.
# skip_database_creation = false
## Name of existing retention policy to write to. Empty string writes to
## the default retention policy. Only takes effect when using HTTP.
# retention_policy = ""
## Write consistency (clusters only), can be: "any", "one", "quorum", "all".
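The excerpt above covers only the output side (Telegraf writing to InfluxDB). For Kafka to feed Telegraf, a matching [[inputs.kafka_consumer]] section is also needed in the same telegraf.conf. A minimal sketch follows; the broker address, topic name, consumer group, and data format are placeholders to adapt to the actual cluster:

```toml
# Read metrics from the Kafka topic(s) the client's producer writes to
[[inputs.kafka_consumer]]
  ## Kafka broker list (placeholder address)
  brokers = ["localhost:9092"]
  ## Topic(s) to consume (placeholder topic name)
  topics = ["telegraf"]
  ## Consumer group for offset tracking (placeholder name)
  consumer_group = "telegraf_metrics_consumers"
  ## Format of the incoming messages; must match what the producer
  ## actually writes ("influx", "json", etc.)
  data_format = "influx"
```

After editing the configuration, restart the Telegraf service (for example with systemctl restart telegraf) so the new input plugin takes effect.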