07 Deploying InfluxDB and Telegraf

This post walks through deploying InfluxDB and Telegraf: downloading and extracting the packages, editing the configuration files, creating the data directory, and setting up the environment variables and startup files. It also covers adding InfluxDB as a data source in Grafana and deploying Chronograf, and closes with references for a follow-up cluster deployment.

Deploying InfluxDB

Author: MappleZF

Version: 1.0.0

1. Deploying InfluxDB

1.1 Download and extract
wget -c https://dl.influxdata.com/influxdb/releases/influxdb-1.8.3_linux_amd64.tar.gz
tar -xf influxdb-1.8.3_linux_amd64.tar.gz -C /opt/
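The archive unpacks into a versioned directory under /opt; the exact name can vary by release, so the rename below is only a sketch to line the path up with the /opt/influxdb paths used in the prompts later in this post (check `ls /opt/` for the real name).

```
# Assumed extracted directory name; adjust to what ls /opt/ actually shows
mv /opt/influxdb-1.8.3-1 /opt/influxdb

# Quick sanity check that the bundled binary runs
/opt/influxdb/usr/bin/influxd version
```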

1.2 Edit the configuration files
1.2.1 Edit influxdb.conf
[root@k8smaster01.host.com:/opt/influxdb/etc/influxdb]# vim influxdb.conf
### Welcome to the InfluxDB configuration file.

# The values in this file override the default values used by the system if
# a config option is not specified. The commented out lines are the configuration
# field and the default value used. Uncommenting a line and changing the value
# will change the value used at runtime when the process is restarted.

# Once every 24 hours InfluxDB will report usage data to usage.influxdata.com
# The data includes a random ID, os, arch, version, the number of series and other
# usage data. No data from user databases is ever transmitted.
# Change this option to true to disable reporting.
# Controls whether InfluxDB reports usage data to InfluxData; default: false
# reporting-disabled = false

# Bind address to use for the RPC service for backup and restore.
# Bind address for the backup/restore RPC service; default: 127.0.0.1:8088
# bind-address = "127.0.0.1:8088"

### Parameters for the Raft consensus group that stores metadata about the InfluxDB cluster
### [meta]
###
### Controls the parameters for the Raft consensus group that stores metadata
### about the InfluxDB cluster.
###

[meta]
  # Where the metadata/raft database is stored
  # Directory where the metadata/raft database is stored; default: /var/lib/influxdb/meta
  dir = "/var/lib/influxdb/meta"

  # Automatically create a default retention policy when creating a database.
  # Controls the default retention policy; when a database is created, an "autogen" retention policy is generated automatically; default: true
  # retention-autocreate = true

  # If log messages are printed for the meta service
  # Whether log messages are printed for the meta service; default: true
  # logging-enabled = true

### Controls where the actual shard data for InfluxDB lives and how it is flushed from the WAL. "dir" may need to be changed to a suitable place for your system; the WAL settings are an advanced configuration, and the defaults should work for most systems.
### [data]
###
### Controls where the actual shard data for InfluxDB lives and how it is
### flushed from the WAL. "dir" may need to be changed to a suitable place
### for your system, but the WAL settings are an advanced configuration. The
### defaults should work for most systems.
###

[data]
  # The directory where the TSM storage engine stores TSM files.
  # Directory where the final data (TSM files) is stored; default: /var/lib/influxdb/data
  dir = "/mnt/cephfs/influxdb/data"

  # The directory where the TSM storage engine stores WAL files.
  # Directory for the write-ahead log (WAL); default: /var/lib/influxdb/wal
  wal-dir = "/var/lib/influxdb/wal"

  # The amount of time that a write will wait before fsyncing.  A duration
  # greater than 0 can be used to batch up multiple fsync calls.  This is useful for slower
  # disks or when WAL write contention is seen.  A value of 0s fsyncs every write to the WAL.
  # Values in the range of 0-100ms are recommended for non-SSD disks.
  # How long a write waits before fsyncing. A duration greater than 0 batches multiple fsync calls, which helps on slower disks or when WAL write contention is seen. 0s fsyncs every write to the WAL. Values in the 0-100ms range are recommended for non-SSD disks.
  # wal-fsync-delay = "0s"


  # The type of shard index to use for new shards.  The default is an in-memory index that is
  # recreated at startup.  A value of "tsi1" will use a disk based index that supports higher
  # cardinality datasets.
  # The type of shard index to use for new shards. The default is an in-memory index rebuilt at startup; "tsi1" uses a disk-based index that supports higher-cardinality datasets.
  index-version = "tsi1"

  # Trace logging provides more verbose output around the tsm engine. Turning
  # this on can provide more useful output for debugging tsm engine issues.
  # Whether trace logging is enabled; default: false
  # trace-logging-enabled = false

  # Whether queries should be logged before execution. Very useful for troubleshooting, but will
  # log any sensitive data contained within a query.
  # Whether queries are logged before execution; default: true
  # query-log-enabled = true

  # Validates incoming writes to ensure keys only have valid unicode characters.
  # This setting will incur a small overhead because every key must be checked.
  # Validates incoming writes to ensure keys contain only valid unicode characters. Incurs a small overhead because every key must be checked; default: false
  # validate-keys = false

  # Settings for the TSM engine

  # CacheMaxMemorySize is the maximum size a shard's cache can
  # reach before it starts rejecting writes.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # Maximum size a shard's cache can reach before it starts rejecting writes; default: 1g
  cache-max-memory-size = "4g"

  # CacheSnapshotMemorySize is the size at which the engine will
  # snapshot the cache and write it to a TSM file, freeing up memory
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # Cache size at which the engine snapshots the cache and writes it to a TSM file, freeing memory; default: 25m
  # cache-snapshot-memory-size = "25m"

  # CacheSnapshotWriteColdDuration is the length of time at
  # which the engine will snapshot the cache and write it to
  # a new TSM file if the shard hasn't received writes or deletes
  # How long a shard must go without writes or deletes before the engine snapshots the cache and writes it to a new TSM file; default: 10m
  # cache-snapshot-write-cold-duration = "10m"

  # CompactFullWriteColdDuration is the duration at which the engine
  # will compact all TSM files in a shard if it hasn't received a
  # write or delete
  # How long a shard must go without writes or deletes before the engine compacts all of its TSM files; default: 4h
  # compact-full-write-cold-duration = "4h"

  # The maximum number of concurrent full and level compactions that can run at one time.  A
  # value of 0 results in 50% of runtime.GOMAXPROCS(0) used at runtime.  Any number greater
  # than 0 limits compactions to that value.  This setting does not apply
  # to cache snapshotting.
  # Maximum number of concurrent full and level compactions that can run at one time. A value of 0 uses 50% of runtime.GOMAXPROCS(0); any value greater than 0 caps compactions at that number. Does not apply to cache snapshotting; default: 0
  # max-concurrent-compactions = 0

  # CompactThroughput is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk. Note that short bursts are allowed
  # to happen at a possibly larger value, set by CompactThroughputBurst
  # Rate limit, in bytes per second, at which TSM compactions are allowed to write to disk. Short bursts may exceed this, up to the value set by compact-throughput-burst; default: 48m
  # compact-throughput = "48m"

  # CompactThroughputBurst is the rate limit in bytes per second that we
  # will allow TSM compactions to write to disk.
  # Burst rate limit, in bytes per second, at which TSM compactions are allowed to write to disk; default: 48m
  # compact-throughput-burst = "48m"

  # If true, then the mmap advise value MADV_WILLNEED will be provided to the kernel with respect to
  # TSM files. This setting has been found to be problematic on some kernels, and defaults to off.
  # It might help users who have slow disks in some cases.
  # If true, the mmap advise value MADV_WILLNEED is provided to the kernel for TSM files. This setting has been found to be problematic on some kernels; it may help users with slow disks in some cases; default: false
  # tsm-use-madv-willneed = false

  # Settings for the inmem index

  # The maximum series allowed per database before writes are dropped.  This limit can prevent
  # high cardinality issues at the database level.  This limit can be disabled by setting it to
  # 0.
  # Maximum number of series allowed per database before writes are dropped; 0 disables the limit; default: 1000000
  max-series-per-database = 10000000

  # The maximum number of tag values per tag that are allowed before writes are dropped.  This limit
  # can prevent high cardinality tag values from being written to a measurement.  This limit can be
  # disabled by setting it to 0.
  # Maximum number of values allowed per tag before writes are dropped; 0 disables the limit; default: 100000
  max-values-per-tag = 1000000

  # Settings for the tsi1 index

  # The threshold, in bytes, when an index write-ahead log file will compact
  # into an index file. Lower sizes will cause log files to be compacted more
  # quickly and result in lower heap usage at the expense of write throughput.
  # Higher sizes will be compacted less frequently, store more series in-memory,
  # and provide higher write throughput.
  # Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k).
  # Values without a size suffix are in bytes.
  # Threshold, in bytes, at which an index write-ahead log file is compacted into an index file. Smaller sizes compact log files sooner and lower heap usage at the expense of write throughput;
  # larger sizes compact less frequently, keep more series in memory, and give higher write throughput. Valid size suffixes are k, m, or g (case insensitive, 1024 = 1k); values without a suffix are in bytes; default: 1m
  # max-index-log-file-size = "1m"

  # The size of the internal cache used in the TSI index to store previously
  # calculated series results. Cached results will be returned quickly from the cache rather
  # than needing to be recalculated when a subsequent query with a matching tag set is executed.
  # Increasing this value will increase the amount of memory used. Setting the value to 0 will
  # disable the cache.
  # series-id-set-cache-size = 100
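With influxdb.conf in place, the directories it references have to exist before the first start. The sketch below assumes the tarball was unpacked under /opt/influxdb as above and uses the paths set in the [meta] and [data] sections; adjust both to your environment.

```
# Create the metadata, data, and WAL directories referenced in influxdb.conf
mkdir -p /var/lib/influxdb/meta /mnt/cephfs/influxdb/data /var/lib/influxdb/wal

# Start InfluxDB in the foreground against this config to confirm it comes up cleanly
/opt/influxdb/usr/bin/influxd -config /opt/influxdb/etc/influxdb/influxdb.conf
```

Once this foreground start works, the same command line is what the environment file and startup unit mentioned in the summary need to run.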
2. Deploying Telegraf in a container

Telegraf is a data-collection agent written in Go. It can gather many different kinds of data from a wide range of sources and ship them to a variety of targets, and its many input and output plugins make it easy to send data to destinations such as InfluxDB, Elasticsearch, and Kafka.

Running Telegraf in a container is worth recommending: it makes deployment and management more convenient and simplifies scaling and upgrades. The steps are as follows.

Step 1: Create a Dockerfile

First, create a Dockerfile that describes how Telegraf is packaged as a Docker image. A basic example:

```
FROM telegraf:1.18
COPY telegraf.conf /etc/telegraf/telegraf.conf
```

FROM specifies the base image, here the official Telegraf 1.18 image. COPY copies the Telegraf configuration file from the local filesystem to /etc/telegraf/telegraf.conf inside the container.

Step 2: Build the image

Pull the base image and build the new image with:

```
docker build -t my-telegraf .
```

where my-telegraf is the name of the newly built image.

Step 3: Run the container

Start a new container with:

```
docker run -d --name my-telegraf-container my-telegraf
```

The -d option runs the container in the background; --name sets the container's name.

Step 4: Check that Telegraf is running

Check on the new container with:

```
docker ps
```

If everything is working, the running container will be listed.

Finally, note that telegraf.conf needs to be adjusted for your own requirements and environment, including the connection between Telegraf and the target data sources or databases.
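As a rough starting point, the telegraf.conf baked into the image above could be as small as the sketch below, which just ships basic host metrics to the InfluxDB instance deployed earlier. The URL, database name, and plugin selection are assumptions to adapt to your environment.

```
# Minimal telegraf.conf sketch (illustrative values only)
[agent]
  interval = "10s"          # how often inputs are collected
  flush_interval = "10s"    # how often buffered metrics are written to outputs

# Output: write metrics to the InfluxDB 1.8 HTTP endpoint (assumed address)
[[outputs.influxdb]]
  urls = ["http://127.0.0.1:8086"]
  database = "telegraf"     # created automatically on first write if it does not exist

# Inputs: basic host metrics
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[inputs.disk]]
```

Instead of rebuilding the image for every config change, the file can also be bind-mounted at run time, for example with `-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro` on the `docker run` line.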