MySQL Partitioning

Preface

When a table holds a large amount of data, storing its rows in partitions according to some condition can speed up queries. Partitioning is used even more heavily in Oracle and Hive.

Logically a partitioned table is still just one table with one set of indexes; only the physical storage is split.

1. Creating partitioned tables

Note: MySQL's built-in partitioning is horizontal, i.e. it distributes rows (not columns) across partitions; vertical, by-column splitting has to be done manually with separate tables.

Create a table part_tab, partitioned by the year of its date column c3:

CREATE TABLE part_tab (
 c1 int default NULL, 
 c2 varchar(30) default NULL, 
 c3 date default NULL
) engine=myisam 
PARTITION BY RANGE (year(c3)) (PARTITION p0 VALUES LESS THAN (1995),
PARTITION p1 VALUES LESS THAN (1996) , PARTITION p2 VALUES LESS THAN (1997) ,
PARTITION p3 VALUES LESS THAN (1998) , PARTITION p4 VALUES LESS THAN (1999) ,
PARTITION p5 VALUES LESS THAN (2000) , PARTITION p6 VALUES LESS THAN (2001) ,
PARTITION p7 VALUES LESS THAN (2002) , PARTITION p8 VALUES LESS THAN (2003) ,
PARTITION p9 VALUES LESS THAN (2004) , PARTITION p10 VALUES LESS THAN (2010),
PARTITION p11 VALUES LESS THAN MAXVALUE );

The keywords used above:

  • PARTITION BY: declares the partitioning function.
  • RANGE: partition by value range; besides RANGE, MySQL also offers HASH, KEY, LIST and composite (sub-partitioned) schemes.
  • PARTITION p0: p0 is the name of the partition.
  • VALUES LESS THAN (1995): rows whose year(c3) is less than 1995 go into p0.
  • PARTITION p11 VALUES LESS THAN MAXVALUE: any larger value falls into partition p11.
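
Once the table exists, you can check how the partitions were defined and how many rows each one currently holds through the standard information_schema metadata. A minimal sketch (run against the database that owns part_tab):

SELECT PARTITION_NAME, PARTITION_METHOD, PARTITION_DESCRIPTION, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'part_tab';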

1.1 RANGE partitioning

CREATE TABLE users (
       uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
       name VARCHAR(30) NOT NULL DEFAULT '',
       email VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY RANGE (uid) (
       PARTITION p0 VALUES LESS THAN (3000000)
       DATA DIRECTORY = '/data0/data'
       INDEX DIRECTORY = '/data1/idx',
 
       PARTITION p1 VALUES LESS THAN (6000000)
       DATA DIRECTORY = '/data2/data'
       INDEX DIRECTORY = '/data3/idx',
 
       PARTITION p2 VALUES LESS THAN (9000000)
       DATA DIRECTORY = '/data4/data'
       INDEX DIRECTORY = '/data5/idx',
 
       PARTITION p3 VALUES LESS THAN MAXVALUE
       DATA DIRECTORY = '/data6/data'
       INDEX DIRECTORY = '/data7/idx'
);

Here the users table is split into 4 partitions with a boundary every 3 million rows. Each partition keeps its data and index files in its own directories; if those directories sit on separate physical disks, disk I/O is spread out and throughput improves. Note that per-partition DATA DIRECTORY / INDEX DIRECTORY is mainly useful with MyISAM; InnoDB ignores the INDEX DIRECTORY option.
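
From MySQL 5.6 on, a query can also name partitions explicitly, which is a convenient way to look at the rows that ended up in a given partition. A minimal sketch (using the users table and partition names from the example above):

-- count only the rows stored in partition p0 (uid < 3000000)
SELECT COUNT(*) FROM users PARTITION (p0);

-- read from two specific partitions instead of the whole table
SELECT uid, name FROM users PARTITION (p1, p2) LIMIT 10;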

1.2 LIST partitioning

CREATE TABLE category (
     cid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY LIST (cid) (
     PARTITION p0 VALUES IN (0,4,8,12)
     DATA DIRECTORY = '/data0/data' 
     INDEX DIRECTORY = '/data1/idx',
     
     PARTITION p1 VALUES IN (1,5,9,13)
     DATA DIRECTORY = '/data2/data'
     INDEX DIRECTORY = '/data3/idx',
     
     PARTITION p2 VALUES IN (2,6,10,14)
     DATA DIRECTORY = '/data4/data'
     INDEX DIRECTORY = '/data5/idx',
     
     PARTITION p3 VALUES IN (3,7,11,15)
     DATA DIRECTORY = '/data6/data'
     INDEX DIRECTORY = '/data7/idx'
);

The table is split into 4 partitions, again with data and index files stored separately. A row is accepted only if its cid value appears in one of the VALUES IN lists, as the sketch below shows.
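
A minimal sketch of that behavior (the cid value 16 is just an example of a value not covered by any list):

-- 16 does not appear in any VALUES IN list, so the insert is rejected,
-- typically with ERROR 1526: Table has no partition for value 16
INSERT INTO category (cid, name) VALUES (16, 'books');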

1.3 HASH partitioning

CREATE TABLE users (
     uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(30) NOT NULL DEFAULT '',
     email VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY HASH (uid) PARTITIONS 4 (
     PARTITION p0
     DATA DIRECTORY = '/data0/data'
     INDEX DIRECTORY = '/data1/idx',
 
     PARTITION p1
     DATA DIRECTORY = '/data2/data'
     INDEX DIRECTORY = '/data3/idx',
 
     PARTITION p2
     DATA DIRECTORY = '/data4/data'
     INDEX DIRECTORY = '/data5/idx',
 
     PARTITION p3
     DATA DIRECTORY = '/data6/data'
     INDEX DIRECTORY = '/data7/idx'
);

1.4 KEY partitioning

CREATE TABLE users (
     uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(30) NOT NULL DEFAULT '',
     email VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY KEY (uid) PARTITIONS 4 (
     PARTITION p0
     DATA DIRECTORY = '/data0/data'
     INDEX DIRECTORY = '/data1/idx',
     
     PARTITION p1
     DATA DIRECTORY = '/data2/data' 
     INDEX DIRECTORY = '/data3/idx',
     
     PARTITION p2 
     DATA DIRECTORY = '/data4/data'
     INDEX DIRECTORY = '/data5/idx',
     
     PARTITION p3 
     DATA DIRECTORY = '/data6/data'
     INDEX DIRECTORY = '/data7/idx'
);

Again the table is split into 4 partitions with separately stored data and index files. KEY partitioning is declared just like HASH partitioning, but rows are placed differently, as sketched below.
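
The practical difference between the two: HASH takes a user-supplied integer expression and places each row in partition number MOD(expression, number_of_partitions), while KEY hashes the column with the server's internal function, so the column does not have to be an integer. A minimal sketch of the HASH arithmetic (the uid value is only an example):

-- with PARTITION BY HASH (uid) PARTITIONS 4, a row with uid = 1234567
-- is stored in partition MOD(1234567, 4) = 3, i.e. p3
SELECT MOD(1234567, 4) AS partition_number;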

Sub-partitioning splits every partition of a RANGE or LIST partitioned table one level further; the sub-partitions themselves use HASH or KEY.

CREATE TABLE users (
     uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(30) NOT NULL DEFAULT '',
     email VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY RANGE (uid) SUBPARTITION BY HASH (uid % 4) SUBPARTITIONS 2(
     PARTITION p0 VALUES LESS THAN (3000000)
     DATA DIRECTORY = '/data0/data'
     INDEX DIRECTORY = '/data1/idx',
 
     PARTITION p1 VALUES LESS THAN (6000000)
     DATA DIRECTORY = '/data2/data'
     INDEX DIRECTORY = '/data3/idx'
);

Here each RANGE partition is further divided into sub-partitions of type HASH.
Alternatively:

CREATE TABLE users (
     uid INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
     name VARCHAR(30) NOT NULL DEFAULT '',
     email VARCHAR(30) NOT NULL DEFAULT ''
)
PARTITION BY RANGE (uid) SUBPARTITION BY KEY(uid) SUBPARTITIONS 2(
     PARTITION p0 VALUES LESS THAN (3000000)
     DATA DIRECTORY = '/data0/data'
     INDEX DIRECTORY = '/data1/idx',
 
     PARTITION p1 VALUES LESS THAN (6000000)
     DATA DIRECTORY = '/data2/data'
     INDEX DIRECTORY = '/data3/idx'
);

Here each RANGE partition is further divided into sub-partitions of type KEY.

2. Partition management

2.1 Dropping a partition

ALTER TABLE users DROP PARTITION p0;      #drop partition p0 (the rows stored in p0 are deleted along with it)

2.2 Rebuilding partitions (ALTER TABLE … REORGANIZE PARTITION …)

Rebuilding RANGE partitions

#Merge the old p0 and p1 partitions into a new p0 partition.
ALTER TABLE users REORGANIZE PARTITION p0,p1 INTO (PARTITION p0 VALUES LESS THAN (6000000));  
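
REORGANIZE can also go the other way and split one partition into several, as long as the new ranges together cover exactly the same interval. A minimal sketch (assuming p0 still covers uid < 3000000 as in the original CREATE TABLE; the names p0a and p0b are illustrative):

ALTER TABLE users REORGANIZE PARTITION p0 INTO (
    PARTITION p0a VALUES LESS THAN (1500000),
    PARTITION p0b VALUES LESS THAN (3000000)
);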

Rebuilding LIST partitions

#Merge the old p0 and p1 partitions of the LIST-partitioned category table into a new p0 partition.
ALTER TABLE category REORGANIZE PARTITION p0,p1 INTO (PARTITION p0 VALUES IN (0,1,4,5,8,9,12,13));

Rebuilding HASH/KEY partitions

#For HASH/KEY partitions you do not name partitions to reorganize; instead COALESCE PARTITION merges away
#the given number of partitions (here 2 of the original 4) and redistributes their rows. The partition count
#can only be reduced this way; to increase it, use ADD PARTITION.
ALTER TABLE users COALESCE PARTITION 2;

2.3 Adding partitions (ALTER TABLE … ADD PARTITION …)

Adding a RANGE or LIST partition

#Add a new LIST partition to the category table (for a RANGE table the same syntax is used with VALUES LESS THAN).
ALTER TABLE category ADD PARTITION (PARTITION p4 VALUES IN (16,17,18,19)
            DATA DIRECTORY = '/data8/data'
            INDEX DIRECTORY = '/data9/idx');

Adding HASH/KEY partitions

ALTER TABLE users ADD PARTITION PARTITIONS 8;   #adds 8 more HASH/KEY partitions on top of the existing ones.

Adding partitioning to an existing table (here results is partitioned by the month of ttime; since MONTH() returns 1 to 12, partition p0 will only ever receive rows whose ttime is NULL):

alter table results partition by RANGE (month(ttime)) 
(
PARTITION p0 VALUES LESS THAN (1),
PARTITION p1 VALUES LESS THAN (2) , 
PARTITION p2 VALUES LESS THAN (3) ,
PARTITION p3 VALUES LESS THAN (4) , 
PARTITION p4 VALUES LESS THAN (5) ,
PARTITION p5 VALUES LESS THAN (6) , 
PARTITION p6 VALUES LESS THAN (7) ,
PARTITION p7 VALUES LESS THAN (8) , 
PARTITION p8 VALUES LESS THAN (9) ,
PARTITION p9 VALUES LESS THAN (10) , 
PARTITION p10 VALUES LESS THAN (11),
PARTITION p11 VALUES LESS THAN (12),
PARTITION P12 VALUES LESS THAN (13) 
);

3. Default restriction on the partitioning column

By default, every column used in the partitioning expression must be part of every unique key on the table, including the PRIMARY KEY. Two ways to deal with this restriction:
[Method 1] Partition on a column that is already in the primary key (here the id column):

#This typically fails with ERROR 1503 (A PRIMARY KEY must include all columns in the table's partitioning
#function), because `added` is not part of the primary key:
ALTER TABLE np_pk PARTITION BY HASH( TO_DAYS(added) ) PARTITIONS 4;

#This works, because id is the primary key column:
ALTER TABLE np_pk PARTITION BY HASH(id) PARTITIONS 4;

[Method 2] Drop the existing primary key and create a new composite key that includes the partitioning column:

alter table results drop PRIMARY KEY;

alter table results add PRIMARY KEY(id, ttime);
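
Putting method 2 together with the month-based partitioning from section 2.3, the whole sequence looks roughly like this (a sketch only; it assumes results has the columns id and ttime used above, and uses two half-year partitions to keep the example short):

-- the partitioning column must be part of the primary key, so rebuild the key first
ALTER TABLE results DROP PRIMARY KEY;
ALTER TABLE results ADD PRIMARY KEY (id, ttime);

-- now the table can be partitioned on ttime
ALTER TABLE results PARTITION BY RANGE (MONTH(ttime)) (
    PARTITION p_h1 VALUES LESS THAN (7),     -- January to June
    PARTITION p_h2 VALUES LESS THAN (13)     -- July to December
);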

4. Performance before and after partitioning

Create the unpartitioned table:

create table no_part_tab (
        c1 int(11) default NULL,
        c2 varchar(30) default NULL,
        c3 date default NULL
       ) engine=myisam;

Create the partitioned table:

mysql> CREATE TABLE part_tab (
 c1 int default NULL, 
 c2 varchar(30) default NULL, 
 c3 date default NULL
) engine=myisam 
PARTITION BY RANGE (year(c3)) (PARTITION p0 VALUES LESS THAN (1995),
PARTITION p1 VALUES LESS THAN (1996) , PARTITION p2 VALUES LESS THAN (1997) ,
PARTITION p3 VALUES LESS THAN (1998) , PARTITION p4 VALUES LESS THAN (1999) ,
PARTITION p5 VALUES LESS THAN (2000) , PARTITION p6 VALUES LESS THAN (2001) ,
PARTITION p7 VALUES LESS THAN (2002) , PARTITION p8 VALUES LESS THAN (2003) ,
PARTITION p9 VALUES LESS THAN (2004) , PARTITION p10 VALUES LESS THAN (2010),
PARTITION p11 VALUES LESS THAN MAXVALUE );

Load 8 million rows of test data through a stored procedure:

mysql> set sql_mode='';  /* if creating the stored procedure fails, set this variable first - a possible bug? */
mysql> delimiter //      /* change the statement terminator to //, because the procedure body itself contains ; */

mysql> CREATE PROCEDURE load_part_tab()
       begin
         declare v int default 0;
         while v < 8000000 do
           insert into part_tab
           values (v, 'testing partitions', adddate('1995-01-01', (rand(v)*36520) mod 3652));
           set v = v + 1;
         end while;
       end
       //
mysql> delimiter ;
mysql> call load_part_tab();
Query OK, 1 row affected (8 min 17.75 sec)

mysql> insert into no_part_tab select * from part_tab;   # copy the 8 million rows into the unpartitioned table no_part_tab
Query OK, 8000000 rows affected (51.59 sec)
Records: 8000000 Duplicates: 0 Warnings: 0

Test query performance:

mysql> select count(*) from part_tab where c3 > date'1995-01-01'and c3 < date'1995-12-31';
+----------+
| count(*) |
+----------+
|   795181 |
+----------+
  1 row in set (0.55 sec)

mysql> select count(*) from no_part_tab where c3 > date'1995-01-01'and c3 < date'1995-12-31'; 
+----------+
| count(*) |
+----------+
|   795181 |
+----------+
1 row in set (4.69 sec)

The partitioned table answers the query in roughly 90% less time than the unpartitioned table (0.55 sec vs. 4.69 sec).

Use EXPLAIN to analyze how each query is executed:

mysql> explain select count(*) from no_part_tab where c3 > date('1995-01-01') and c3 < date ('1995-12-31') \G    #the trailing \G makes mysql print the result vertically (one column per line)
  *************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: no_part_tab
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 8000000               # all 8,000,000 rows have to be examined
        Extra: Using where
  1 row in set (0.00 sec)

  mysql> explain select count(*) from part_tab where c3 > date ('1995-01-01') and c3 < date ('1995-12-31') \G

  *************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: part_tab
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 798458               # only 798,458 rows need to be examined, because the query is pruned to the relevant partitions
        Extra: Using where
  1 row in set (0.00 sec)
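
To see which partitions the optimizer actually reads, EXPLAIN can be asked to list them: on MySQL 5.1 to 5.6 add the PARTITIONS keyword, and from 5.7 on the partitions column is always part of the EXPLAIN output. A sketch of the command; the extra partitions column should list only the partitions that can hold 1995 dates rather than all of them:

mysql> explain partitions select count(*) from part_tab where c3 > date('1995-01-01') and c3 < date('1995-12-31') \G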

Try the same queries again after creating an index on c3:

mysql> create index idx_of_c3 on no_part_tab (c3);
Query OK, 8000000 rows affected (1 min 18.08 sec)
Records: 8000000 Duplicates: 0 Warnings: 0

mysql> create index idx_of_c3 on part_tab (c3);
Query OK, 8000000 rows affected (1 min 19.19 sec)
Records: 8000000 Duplicates: 0 Warnings: 0

Database file sizes after the indexes were created:

2021-05-24 09:23             8,608 no_part_tab.frm
2021-05-24 09:24       255,999,996 no_part_tab.MYD
2021-05-24 09:24        81,611,776 no_part_tab.MYI
2021-05-24 09:25                 0 part_tab#P#p0.MYD
2021-05-24 09:26             1,024 part_tab#P#p0.MYI
2021-05-24 09:26        25,550,656 part_tab#P#p1.MYD
2021-05-24 09:26         8,148,992 part_tab#P#p1.MYI
2021-05-24 09:26        25,620,192 part_tab#P#p10.MYD
2021-05-24 09:26         8,170,496 part_tab#P#p10.MYI
2021-05-24 09:25                 0 part_tab#P#p11.MYD
2021-05-24 09:26             1,024 part_tab#P#p11.MYI
2021-05-24 09:26        25,656,512 part_tab#P#p2.MYD
2021-05-24 09:26         8,181,760 part_tab#P#p2.MYI
2021-05-24 09:26        25,586,880 part_tab#P#p3.MYD
2021-05-24 09:26         8,160,256 part_tab#P#p3.MYI
2021-05-24 09:26        25,585,696 part_tab#P#p4.MYD
2021-05-24 09:26         8,159,232 part_tab#P#p4.MYI
2021-05-24 09:26        25,585,216 part_tab#P#p5.MYD
2021-05-24 09:26         8,159,232 part_tab#P#p5.MYI
2021-05-24 09:26        25,655,740 part_tab#P#p6.MYD
2021-05-24 09:26         8,181,760 part_tab#P#p6.MYI
2021-05-24 09:26        25,586,528 part_tab#P#p7.MYD
2021-05-24 09:26         8,160,256 part_tab#P#p7.MYI
2021-05-24 09:26        25,586,752 part_tab#P#p8.MYD
2021-05-24 09:26         8,160,256 part_tab#P#p8.MYI
2021-05-24 09:26        25,585,824 part_tab#P#p9.MYD
2021-05-24 09:26         8,159,232 part_tab#P#p9.MYI
2021-05-24 09:25             8,608 part_tab.frm
2021-05-24 09:25                68 part_tab.par

Re-run the SQL performance test:

mysql> select count(*) from no_part_tab where c3 > date ('1995-01-01') and c3 < date ('1995-12-31');
+----------+
| count(*) |
+----------+
|   795181 |
+----------+
  1 row in set (2.42 sec)   # about 51% of the original 4.69 sec

  #after restarting MySQL (net stop mysql, net start mysql) the query time drops to 0.89 sec, almost the same as the partitioned table.

 

  mysql> select count(*) from part_tab where c3 > date ('1995-01-01') and c3 < date ('1995-12-31');

  +----------+
  | count(*) |
  +----------+
  |   795181 |
  +----------+
  1 row in set (0.86 sec)

Going one step further:
Widen the date range:

mysql> select count(*) from no_part_tab where c3 > date ('1995-01-01') and c3 < date ('1997-12-31');
+----------+
| count(*) |
+----------+
| 2396524 |
+----------+
1 row in set (5.42 sec)

mysql> select count(*) from part_tab where c3 > date ('1995-01-01') and c3 < date ('1997-12-31');
+----------+
| count(*) |
+----------+
| 2396524 |
+----------+
  1 row in set (2.63 sec)


Add a condition on an unindexed column:

mysql> select count(*) from no_part_tab where c3 > date ('1995-01-01') and c3 < date ('1996-12-31') and c2='hello';
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (11.52 sec)

mysql> select count(*) from part_tab where c3 > date ('1995-01-01') and c3 < date ('1996-12-31') and c2='hello';
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.75 sec)

Conclusions:

  • For large data volumes, use partitioning.
  • Drop columns that are not needed.
  • According to the manual, increasing myisam_max_sort_file_size (the maximum size of the temporary file MySQL may use while rebuilding a MyISAM index) improves partition maintenance performance; see the sketch below.
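
A minimal sketch of checking and raising that variable (the 8 GB value is only an example; choose a size that matches your data and available disk space):

-- current value
SHOW VARIABLES LIKE 'myisam_max_sort_file_size';

-- takes effect for index builds started after the change
SET GLOBAL myisam_max_sort_file_size = 8 * 1024 * 1024 * 1024;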