Monitoring MySQL with Zabbix's Built-in Template

Installing the Zabbix server and agent is not covered again here; earlier articles walk through it, so check the history if you are interested. Today's topic is how to use Zabbix's built-in template to monitor the state of a MySQL service, with graphs that show at a glance how MySQL behaved over any given period.

01 Configure the Zabbix agent

cd /etc/zabbix/
vim zabbix_agentd.conf
UserParameter=mysql.status[*],/etc/zabbix/chkmysql.sh $1
UserParameter=mysql.ping,netstat -ntpl | grep 3306 | grep mysql | wc -l
UserParameter=mysql.version,mysql -V
### append the lines above to the end of the configuration file

Restart the service:

/etc/init.d/zabbix_agent restart

Note: the content of chkmysql.sh is not included here; if you need it, let's discuss it together - everyone writes it with a different approach.
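Since the article leaves chkmysql.sh unspecified, here is a purely hypothetical minimal sketch (not the author's script) of what it could look like: it maps the key name from `mysql.status[*]` onto a row of `mysqladmin extended-status` output. The connection details are assumptions (e.g. credentials in `~/.my.cnf`).

```shell
#!/bin/bash
# /etc/zabbix/chkmysql.sh -- hypothetical sketch, not the author's script.
# Usage: chkmysql.sh <status-variable>    e.g. chkmysql.sh Uptime
# Assumes mysqladmin can connect without prompting (e.g. via ~/.my.cnf).

chkmysql() {
    key="$1"
    [ -z "$key" ] && { echo "usage: chkmysql <status-variable>"; return 1; }
    # "mysqladmin extended-status" prints rows like:  | Uptime | 720757 |
    # Match the variable name in column 2 and print the stripped value column.
    mysqladmin extended-status 2>/dev/null |
        awk -F'|' -v k="$key" '$2 ~ "^[ ]*" k "[ ]*$" { gsub(/ /, "", $3); print $3 }'
}

# In the real script the function would be invoked as:  chkmysql "$1"
```

The awk parsing can be verified without a live server by piping a sample status row through it.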

02 Test from the server side

On the server, use the bundled zabbix_get command to test that data comes back:

[root@zabbix-server zabbix]# zabbix_get -s mysql-slave -k mysql.status[Uptime]
720757
[root@zabbix-server zabbix]# zabbix_get -s mysql-slave -k mysql.status[Bytes_sent]
1431240816

If the expected data comes back, the configuration is correct; next, log in to the web interface for the remaining setup.

03 Web interface configuration

The web-side configuration boils down to four steps.
1. Create the host and link the template
Open the web UI: Configuration - Hosts - Create host (see below)
(screenshot)
Fill in the host name (this is the agent's hostname; the visible name can be the same), choose a group, enter the agent's address, then link the template (see below)
(screenshot)
Once configured, the host appears in the host list as shown below
(screenshot)

If the host list shows an error after configuration, you may need to adjust command permissions on the agent side, for example: chmod +s /bin/netstat
This lets an ordinary user run a privileged command - in other words, it grants the setuid bit to a command, program, service, or script.
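What `chmod +s` does can be inspected safely on a scratch file before touching the real binary (the path below is illustrative only):

```shell
# Demonstrate what chmod +s does, on a scratch file instead of the real binary.
touch /tmp/suid_demo
chmod 755 /tmp/suid_demo
chmod u+s /tmp/suid_demo       # set the setuid bit: the owner's "x" becomes "s"
ls -l /tmp/suid_demo           # mode now reads -rwsr-xr-x
chmod u-s /tmp/suid_demo       # and this removes it again
rm -f /tmp/suid_demo
```

Note that making netstat setuid-root is a blunt instrument, shown here only to mirror the article; a program with the setuid bit runs with its owner's privileges regardless of who invokes it.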

2. Configure the items
3. Configure the triggers
Because the built-in template is used, the items and triggers are already defined by default; adjust them as needed.
4. Create the graphs
Create a graph as shown below
(screenshot)
Click the newly created graph - Edit, select the corresponding items, and the configuration is complete
(screenshot)

04 The resulting graphs

(screenshots of the resulting graphs)

Monitoring the database from Zabbix with a shell script

1. Add the key on the zabbix-agent side

vim /etc/zabbix/zabbix_agentd.d/mystat.conf

# the key is mystat[*]
UserParameter=mystat[*],/server/scripts/chk_mysql.sh '$1'

2. Test the key from the server's command line

[root@zabbix ~]# zabbix_get -s 172.16.1.51 -k mystat[Uptime]    
496

3. Write the script

[root@db01 log]# cat /server/scripts/chk_mysql.sh

#!/bin/bash
# -------------------------------------------------------------------------------
# FileName:    chk_mysql.sh
# Revision:    1.0
# Date:        2018/01/31
# Author:      chunk
# Email:       
# Website:     
# Description: 
# Notes:       ~
# -------------------------------------------------------------------------------
# Copyright:   
# License:     GPL

# MySQL user
MYSQL_USER='root'

# MySQL password
MYSQL_PWD='oldboy123'

# host address/IP
MYSQL_HOST='10.0.0.51'

# port
MYSQL_PORT='3306'

# connection command
MYSQL_CONN="/usr/bin/mysqladmin -u${MYSQL_USER} -p${MYSQL_PWD} -h${MYSQL_HOST} -P${MYSQL_PORT}"

# check the argument count
if [ $# -ne 1 ]; then 
    echo "arg error!" 
    exit 1
fi 

# fetch the requested value
case $1 in 
    Uptime) 
        result=`${MYSQL_CONN} status|cut -f2 -d":"|cut -f1 -d"T"` 
        echo $result 
        ;; 
    Com_update) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_update"|cut -d"|" -f3` 
        echo $result 
        ;; 
    Slow_queries) 
        result=`${MYSQL_CONN} status |cut -f5 -d":"|cut -f1 -d"O"` 
        echo $result 
        ;; 
    Com_select) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_select"|cut -d"|" -f3` 
        echo $result 
                ;; 
    Com_rollback) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_rollback"|cut -d"|" -f3` 
                echo $result 
                ;; 
    Questions) 
        result=`${MYSQL_CONN} status|cut -f4 -d":"|cut -f1 -d"S"` 
                echo $result 
                ;; 
    Com_insert) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_insert"|cut -d"|" -f3` 
                echo $result 
                ;; 
    Com_delete) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_delete"|cut -d"|" -f3` 
                echo $result 
                ;; 
    Com_commit) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_commit"|cut -d"|" -f3` 
                echo $result 
                ;; 
    Bytes_sent) 
        result=`${MYSQL_CONN} extended-status |grep -w "Bytes_sent" |cut -d"|" -f3` 
                echo $result 
                ;; 
    Bytes_received) 
        result=`${MYSQL_CONN} extended-status |grep -w "Bytes_received" |cut -d"|" -f3` 
                echo $result 
                ;; 
    Com_begin) 
        result=`${MYSQL_CONN} extended-status |grep -w "Com_begin"|cut -d"|" -f3` 
                echo $result 
                ;; 

        *) 
        echo "Usage:$0(Uptime|Com_update|Slow_queries|Com_select|Com_rollback|Questions|Com_insert|Com_delete|Com_commit|Bytes_sent|Bytes_received|Com_begin)" 
        ;; 
esac

4. Add the items in the web UI
Next, checking whether the MySQL port is alive:
Method 1: with a value mapping
Method 2: without a value mapping
Then create the trigger.

You can also add a remote command: if MySQL goes down, Zabbix restarts it, and if the restart fails it sends an email alert.

Prerequisite: two configuration files must be changed on the agent side.

Enable remote commands in the zabbix-agent configuration file:

EnableRemoteCommands=1

Add a passwordless sudo entry for the zabbix user via visudo.
(An earlier test with this entry started MySQL successfully, but failed for nginx; the cause is worked out below.)

visudo
## Same thing without a password
#%wheel ALL=(ALL)       NOPASSWD: ALL
zabbix  ALL=(ALL)       NOPASSWD: ALL

Stopping nginx: the remote command was not executed

[root@web01 ~]# systemctl stop nginx

Check the log:

Jan 12 04:03:42 web01 sudo: pam_unix(sudo:auth): conversation failed
Jan 12 04:03:42 web01 sudo: pam_unix(sudo:auth): auth could not identify password for [zabbix]
Jan 12 04:03:42 web01 sudo: pam_succeed_if(sudo:auth): requirement "uid >= 1000" not met by user "zabbix"

The sudoers file was then changed:

[root@web01 ~]# visudo 
## Same thing without a password
#%wheel ALL=(ALL)       NOPASSWD: ALL
zabbix ALL=NOPASSWD: ALL

Checking the log again, the remote command now succeeds:
[root@web01 ~]# tail -f /var/log/secure
Jan 12 15:04:26 web01 sudo: zabbix : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/systemctl restart nginx.service

Configure the remote command in Zabbix.
Now when the database is stopped, Zabbix restarts it remotely; if the restart fails, an email alert is sent.
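A typical remote command, configured as an action operation on the trigger, looks like the following; the service name and path are assumptions, so adjust them to your init system:

```shell
# Run on the agent when the "MySQL is down" trigger fires.
# Requires EnableRemoteCommands=1 and the passwordless sudoers entry above.
sudo /bin/systemctl restart mysqld
```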

Reposted from: https://www.cnblogs.com/john5yang/p/10252685.html

MySQL optimization for Zabbix 4.0 (partitioned Zabbix tables)

Zabbix's biggest bottleneck is not the Zabbix service itself but the load on the MySQL database, so tuning MySQL is effectively tuning Zabbix.
There are two common ways to relieve the Zabbix database:

  • Empty the data in the history, history_uint and trends_uint tables (this approach is very time-consuming).
  • Use MySQL table partitioning on the large history tables. Partition while the data volume is still small; once it has grown to tens or hundreds of GB, fall back to the first method, clear the data, and then partition.
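For the first approach, the cleanup is usually done with TRUNCATE rather than row-by-row DELETE, since TRUNCATE drops and recreates the table. A sketch (stop zabbix-server first; this irreversibly discards all collected history):

```sql
-- Discard accumulated history/trend data (irreversible).
TRUNCATE TABLE history;
TRUNCATE TABLE history_uint;
TRUNCATE TABLE trends_uint;
```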

The partitioning procedure is as follows.

Background:

MySQL table partitioning does not support foreign keys. In Zabbix 2.0 and later the history- and trend-related tables have no foreign keys, so they can be partitioned.
Partitioning logically splits one large table into several physical pieces. It has several benefits:
1. In some scenarios it clearly improves query performance, especially when a heavily used table's hot data fits in one or a few partitions: a single partition's data and indexes are far easier to keep in memory than the whole table. Zabbix's general query log shows that the history tables are queried extremely frequently, so optimizing the Zabbix database means optimizing these large history tables first.
2. If queries or updates mostly touch a single partition, performance improves simply because that partition can be read sequentially from disk, instead of using an index and random access across the whole table.
3. Bulk inserts and deletes can be performed by simply adding or dropping partitions, provided the partitions were planned when they were created; the ALTER TABLE operations are also fast.
4. The housekeeper is no longer needed for some data types; it can be disabled per data type under Administration - General - Housekeeping, e.g. for History.
5. When creating a new partition, make sure its range does not overlap an existing one, otherwise an error is returned.
Additional caveats:
  A MySQL table is either fully partitioned or not partitioned at all.
  When splitting a table into many partitions, increase open_files_limit.
  Partitioned tables do not support foreign keys; drop any foreign keys before partitioning.
  Partitioned tables do not support the query cache.
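Before partitioning, you can confirm via information_schema that no foreign keys remain in the zabbix schema (a standard query, shown here as a sketch):

```sql
-- Should return no rows for the tables you intend to partition.
SELECT table_name, constraint_name
FROM information_schema.referential_constraints
WHERE constraint_schema = 'zabbix';
```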

1. Check how much space each table uses

select table_name, (data_length+index_length)/1024/1024 as total_mb, table_rows from information_schema.tables where table_schema='zabbix';

(screenshot)

2. The large Zabbix tables: history, history_log, history_str, history_text, history_uint, trends, trends_uint

There are four helper stored procedures (plus the wrapper partition_maintenance_all):

  • partition_create - creates a partition on the given table in the given schema.
  • partition_drop - drops partitions older than the given timestamp on the given table in the given schema.
  • partition_maintenance - the procedure the user calls; it parses the given arguments and creates/drops partitions as needed.
  • partition_verify - checks whether partitioning is enabled on the given table in the given schema; if not, it creates a single initial partition.

The full script:

DELIMITER $$
CREATE PROCEDURE `partition_create`(SCHEMANAME varchar(64), TABLENAME varchar(64), PARTITIONNAME varchar(64), CLOCK int)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           PARTITIONNAME = The name of the partition to create
        */
        /*
           Verify that the partition does not already exist
        */

        DECLARE RETROWS INT;
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_description >= CLOCK;

        IF RETROWS = 0 THEN
        /*
           1. Print a message indicating that a partition was created.
           2. Create the SQL to create the partition.
           3. Execute the SQL from #2.
        */
        SELECT CONCAT( "partition_create(", SCHEMANAME, ",", TABLENAME, ",", PARTITIONNAME, ",", CLOCK, ")" ) AS msg;
        SET @sql = CONCAT( 'ALTER TABLE ', SCHEMANAME, '.', TABLENAME, ' ADD PARTITION (PARTITION ', PARTITIONNAME, ' VALUES LESS THAN (', CLOCK, '));' );
        PREPARE STMT FROM @sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_drop`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), DELETE_BELOW_PARTITION_DATE BIGINT)
BEGIN
        /*
           SCHEMANAME = The DB schema in which to make changes
           TABLENAME = The table with partitions to potentially delete
           DELETE_BELOW_PARTITION_DATE = Delete any partitions with names that are dates older than this one (yyyy-mm-dd)
        */
        DECLARE done INT DEFAULT FALSE;
        DECLARE drop_part_name VARCHAR(16);

        /*
           Get a list of all the partitions that are older than the date
           in DELETE_BELOW_PARTITION_DATE.  All partitions are prefixed with
           a "p", so use SUBSTRING TO get rid of that character.
        */
        DECLARE myCursor CURSOR FOR
        SELECT partition_name
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND CAST(SUBSTRING(partition_name FROM 2) AS UNSIGNED) < DELETE_BELOW_PARTITION_DATE;
        DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

        /*
           Create the basics for when we need to drop the partition.  Also, create
           @drop_partitions to hold a comma-delimited list of all partitions that
           should be deleted.
        */
        SET @alter_header = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " DROP PARTITION ");
        SET @drop_partitions = "";

        /*
           Start looping through all the partitions that are too old.
        */
        OPEN myCursor;
        read_loop: LOOP
        FETCH myCursor INTO drop_part_name;
        IF done THEN
    LEAVE read_loop;
        END IF;
        SET @drop_partitions = IF(@drop_partitions = "", drop_part_name, CONCAT(@drop_partitions, ",", drop_part_name));
        END LOOP;
        IF @drop_partitions != "" THEN
        /*
           1. Build the SQL to drop all the necessary partitions.
           2. Run the SQL to drop the partitions.
           3. Print out the table partitions that were deleted.
        */
        SET @full_sql = CONCAT(@alter_header, @drop_partitions, ";");
        PREPARE STMT FROM @full_sql;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;

        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, @drop_partitions AS `partitions_deleted`;
        ELSE
        /*
           No partitions are being deleted, so print out "N/A" (Not applicable) to indicate
           that no changes were made.
        */
        SELECT CONCAT(SCHEMANAME, ".", TABLENAME) AS `table`, "N/A" AS `partitions_deleted`;
        END IF;
END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_maintenance`(SCHEMA_NAME VARCHAR(32), TABLE_NAME VARCHAR(32), KEEP_DATA_DAYS INT, HOURLY_INTERVAL INT, CREATE_NEXT_INTERVALS INT)
BEGIN
        DECLARE OLDER_THAN_PARTITION_DATE VARCHAR(16);
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE OLD_PARTITION_NAME VARCHAR(16);
        DECLARE LESS_THAN_TIMESTAMP INT;
        DECLARE CUR_TIME INT;

        CALL partition_verify(SCHEMA_NAME, TABLE_NAME, HOURLY_INTERVAL);
        SET CUR_TIME = UNIX_TIMESTAMP(DATE_FORMAT(NOW(), '%Y-%m-%d 00:00:00'));

        SET @__interval = 1;
        create_loop: LOOP
        IF @__interval > CREATE_NEXT_INTERVALS THEN
    LEAVE create_loop;
        END IF;

        SET LESS_THAN_TIMESTAMP = CUR_TIME + (HOURLY_INTERVAL * @__interval * 3600);
        SET PARTITION_NAME = FROM_UNIXTIME(CUR_TIME + HOURLY_INTERVAL * (@__interval - 1) * 3600, 'p%Y%m%d%H00');
        IF(PARTITION_NAME != OLD_PARTITION_NAME) THEN
    CALL partition_create(SCHEMA_NAME, TABLE_NAME, PARTITION_NAME, LESS_THAN_TIMESTAMP);
        END IF;
        SET @__interval=@__interval+1;
        SET OLD_PARTITION_NAME = PARTITION_NAME;
        END LOOP;

        SET OLDER_THAN_PARTITION_DATE=DATE_FORMAT(DATE_SUB(NOW(), INTERVAL KEEP_DATA_DAYS DAY), '%Y%m%d0000');
        CALL partition_drop(SCHEMA_NAME, TABLE_NAME, OLDER_THAN_PARTITION_DATE);

END$$
DELIMITER ;
DELIMITER $$
CREATE PROCEDURE `partition_verify`(SCHEMANAME VARCHAR(64), TABLENAME VARCHAR(64), HOURLYINTERVAL INT(11))
BEGIN
        DECLARE PARTITION_NAME VARCHAR(16);
        DECLARE RETROWS INT(11);
        DECLARE FUTURE_TIMESTAMP TIMESTAMP;

        /*
         * Check if any partitions exist for the given SCHEMANAME.TABLENAME.
         */
        SELECT COUNT(1) INTO RETROWS
        FROM information_schema.partitions
        WHERE table_schema = SCHEMANAME AND table_name = TABLENAME AND partition_name IS NULL;

        /*
         * If partitions do not exist, go ahead and partition the table
         */
        IF RETROWS = 1 THEN
        /*
         * Take the current date at 00:00:00 and add HOURLYINTERVAL to it.  This is the timestamp below which we will store values.
         * We begin partitioning based on the beginning of a day.  This is because we don't want to generate a random partition
         * that won't necessarily fall in line with the desired partition naming (ie: if the hour interval is 24 hours, we could
         * end up creating a partition now named "p201403270600" when all other partitions will be like "p201403280000").
         */
        SET FUTURE_TIMESTAMP = TIMESTAMPADD(HOUR, HOURLYINTERVAL, CONCAT(CURDATE(), " ", '00:00:00'));
        SET PARTITION_NAME = DATE_FORMAT(CURDATE(), 'p%Y%m%d%H00');

        -- Create the partitioning query
        SET @__PARTITION_SQL = CONCAT("ALTER TABLE ", SCHEMANAME, ".", TABLENAME, " PARTITION BY RANGE(`clock`)");
        SET @__PARTITION_SQL = CONCAT(@__PARTITION_SQL, "(PARTITION ", PARTITION_NAME, " VALUES LESS THAN (", UNIX_TIMESTAMP(FUTURE_TIMESTAMP), "));");

        -- Run the partitioning query
        PREPARE STMT FROM @__PARTITION_SQL;
        EXECUTE STMT;
        DEALLOCATE PREPARE STMT;
        END IF;
END$$
DELIMITER ;

DELIMITER $$
CREATE PROCEDURE `partition_maintenance_all`(SCHEMA_NAME VARCHAR(32))
BEGIN
               CALL partition_maintenance(SCHEMA_NAME, 'history', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_log', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_str', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_text', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'history_uint', 90, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends', 730, 24, 14);
               CALL partition_maintenance(SCHEMA_NAME, 'trends_uint', 730, 24, 14);
END$$
DELIMITER ;
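The arguments to partition_maintenance are, in order: schema name, table name, days of data to keep, partition interval in hours, and the number of future partitions to pre-create. A single table can therefore also be maintained by hand, for example:

```sql
-- Keep 90 days of history, one 24-hour partition per day,
-- and pre-create the next 14 daily partitions:
CALL partition_maintenance('zabbix', 'history', 90, 24, 14);
```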

3. The above contains the stored procedures that create the partitions; save it as partition.sql.

a. Load it into the zabbix database:
mysql -uzabbix -pzabbix zabbix < partition.sql
b. Add a crontab entry that runs every day at 01:01:
crontab -l > crontab.txt 
cat >> crontab.txt <<EOF
#zabbix partition_maintenance
01 01 * * * mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" >/dev/null 2>&1
EOF
crontab crontab.txt
Note: adjust the zabbix MySQL user's password to match your environment.
c. Run it once by hand first (the first run takes a long time, so use nohup):
nohup mysql -uzabbix -pzabbix zabbix -e "CALL partition_maintenance_all('zabbix')" &> /root/partition.log &

4. Verify that the partitioning succeeded

1. Check the partition layout:
mysql> show create table history_uint;
| history_uint | CREATE TABLE `history_uint` (
  `itemid` bigint(20) unsigned NOT NULL,
  `clock` int(11) NOT NULL DEFAULT '0',
  `value` bigint(20) unsigned NOT NULL DEFAULT '0',
  `ns` int(11) NOT NULL DEFAULT '0',
  KEY `history_uint_1` (`itemid`,`clock`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
/*!50100 PARTITION BY RANGE (`clock`)
(PARTITION p201904160000 VALUES LESS THAN (1555430400) ENGINE = InnoDB,
 PARTITION p201904170000 VALUES LESS THAN (1555516800) ENGINE = InnoDB,
 PARTITION p201904180000 VALUES LESS THAN (1555603200) ENGINE = InnoDB,
 PARTITION p201904190000 VALUES LESS THAN (1555689600) ENGINE = InnoDB,
 PARTITION p201904200000 VALUES LESS THAN (1555776000) ENGINE = InnoDB,
 PARTITION p201904210000 VALUES LESS THAN (1555862400) ENGINE = InnoDB,
 PARTITION p201904220000 VALUES LESS THAN (1555948800) ENGINE = InnoDB,
 PARTITION p201904230000 VALUES LESS THAN (1556035200) ENGINE = InnoDB,
 PARTITION p201904240000 VALUES LESS THAN (1556121600) ENGINE = InnoDB,
 PARTITION p201904250000 VALUES LESS THAN (1556208000) ENGINE = InnoDB,
 PARTITION p201904260000 VALUES LESS THAN (1556294400) ENGINE = InnoDB,
 PARTITION p201904270000 VALUES LESS THAN (1556380800) ENGINE = InnoDB,
 PARTITION p201904280000 VALUES LESS THAN (1556467200) ENGINE = InnoDB,
 PARTITION p201904290000 VALUES LESS THAN (1556553600) ENGINE = InnoDB) */ |
1 row in set (0.00 sec)
2. Look at the table files under the MySQL data directory: after partitioning, the table's data has gone from a single .ibd file to multiple .ibd files split by date:
[root@VM_0_3_centos zabbix]# pwd
/data/mysql/zabbix
[root@VM_0_3_centos zabbix]# ls -lh|grep history_uint
-rw-r----- 1 mysql mysql 8.5K Apr 16 16:16 history_uint.frm
-rw-r----- 1 mysql mysql 124M Apr 16 17:11 history_uint#P#p201904160000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904170000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904180000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904190000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904200000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904210000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904220000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904230000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904240000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904250000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904260000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904270000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904280000.ibd
-rw-r----- 1 mysql mysql 112K Apr 16 16:16 history_uint#P#p201904290000.ibd

5. UI setting: Administration - General - Housekeeping

(screenshot)

6. MySQL tuning

a. Enable file-per-table tablespaces

  vi /etc/my.cnf
  # under [mysqld], set:
  innodb_file_per_table=1
Reference: https://blog.51cto.com/yuweibing/1656425

b. Increasing innodb_log_file_size
While Zabbix uses the database, and especially while it deletes historical data, large transactions are involved; if the redo log files are too small, these operations can fail and roll back.

Edit my.cnf and add innodb_log_file_size=20M
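The corresponding my.cnf fragment is below. The 20M value comes from the article and is only illustrative; note that on older MySQL versions the server must be shut down cleanly (and the old ib_logfile* files removed) before a new redo log size takes effect.

```ini
[mysqld]
# Redo log size; too-small logs make large deletes/rollbacks fail and roll back.
innodb_log_file_size = 20M
```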

7. A problem encountered

After running for a while, several agents under the Zabbix server started alerting: Zabbix agent on zabbix server is unreachable for 5 minutes
The zabbix server log showed the following:

 32757:20190430:095456.588 [Z3005] query failed: [1526] Table has no partition for value 1556589295 [insert into history_uint (itemid,clock,ns,value) values (29578,1556589295,330422837,0);
]
 32757:20190430:095457.602 [Z3005] query failed: [1526] Table has no partition for value 1556589297 [insert into history_uint (itemid,clock,ns,value) values (29274,1556589297,446885606,200),(29270,1556589297,447245729,0),(29457,1556589297,493976343,0),(29517,1556589297,498202149,1);
]
 32758:20190430:095458.603 [Z3005] query failed: [1526] Table has no partition for value 1556589298 [insert into history_uint (itemid,clock,ns,value) values (29518,1556589298,496847550,154),(29675,1556589298,502218685,200),(29671,1556589298,502513014,0);
]
 32751:20190430:104927.004 executing housekeeper
 32751:20190430:104927.018 housekeeper [deleted 0 hist/trends, 0 items/triggers, 0 events, 0 problems, 0 sessions, 0 alarms, 0 audit items in 0.013509 sec, idle for 1 hour(s)]

The INSERTs were failing with "query failed: [1526] Table has no partition for value ...", i.e. no partition covered the new rows. The history, history_log, history_str, history_text and related history tables had been partitioned and a scheduled task added, which should create a new day's partition every day, so this error means the partition was never created. Running the maintenance procedure by hand restored normal operation.
Root cause:
The crontab entry was written incorrectly and never ran; correcting it fixed the problem.

References:
https://cloud.tencent.com/developer/article/1006301
https://www.kancloud.cn/devops-centos/centos-linux-devops/375488
https://zabbix.org/wiki/Docs/howto/MySQL_Table_Partitioning_(variant)

Further reading:
Monitoring MySQL with Zabbix's built-in template:
https://mp.weixin.qq.com/s/5cXZvPcMijS4fbz6GtEuPw

Monitoring MySQL with Zabbix: https://www.cnblogs.com/lovelinux199075/p/9015530.html

Monitoring the database from Zabbix with a shell script: https://www.jianshu.com/p/d5c9fe5478c6

MySQL optimization for Zabbix 4.0 (partitioned Zabbix tables): https://www.jianshu.com/p/b6b5b5377c9b

Zabbix database optimization (Oracle table partitioning): https://www.jianshu.com/p/900eac50c097
