MySQL Performance Test: Partitioning vs. Table Sharding

1. Test server configuration:

CPU: 2 × quad-core Intel(R) Xeon(R) E5405 @ 2.00GHz

Memory: 4 GB

Storage: RAID 0 across 36 × 1 TB 7.2K RPM SATA disks; 1 TB of the array was used

MySQL version: 5.1.45-community-log MySQL Community Server (GPL)

2. Test tables:

MyISAM, no partitioning or sharding, a single table holding 1 billion rows:

create table biguser
(
   ID        bigint not null auto_increment comment 'auto-increment ID',
   Username  varchar(30) not null comment 'username',
   sex       tinyint not null comment 'sex',
   birthday  datetime not null comment 'birthday',
   email     varchar(50) not null comment 'email',
   tel       int not null comment 'mobile number',
   intro     varchar(255) not null comment 'bio',
   primary key (ID)
)
engine = MYISAM
default charset=utf8;

MyISAM, no partitioning, sharded into 10 tables with 100 million rows each:

create table user01
(
   ID        bigint not null auto_increment comment 'auto-increment ID',
   Username  varchar(30) not null comment 'username',
   sex       tinyint not null comment 'sex',
   birthday  datetime not null comment 'birthday',
   email     varchar(50) not null comment 'email',
   tel       int not null comment 'mobile number',
   intro     varchar(255) not null comment 'bio',
   primary key (ID)
)
engine = MYISAM
default charset=utf8;

* 10 tables in total: user01, user02, …, user10

 

 

MyISAM, hash-partitioned on ID into 10 partitions, 100 million rows per partition:

create table puser
(
   ID        bigint not null auto_increment comment 'auto-increment ID',
   Username  varchar(30) not null comment 'username',
   sex       tinyint not null comment 'sex',
   birthday  datetime not null comment 'birthday',
   email     varchar(50) not null comment 'email',
   tel       int not null comment 'mobile number',
   intro     varchar(255) not null comment 'bio',
   primary key (ID)
)
engine = MYISAM
default charset=utf8
partition by hash(ID)
partitions 10;
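For an integer expression, MySQL's plain HASH partitioning reduces to MOD(expr, num_partitions), so row placement is easy to predict. A minimal sketch of the mapping:

```python
# Partition chosen by PARTITION BY HASH(ID) PARTITIONS 10:
# MySQL computes MOD(ID, 10) and stores the row in that partition.
def hash_partition(row_id: int, num_partitions: int = 10) -> int:
    """Return the index of the partition MySQL would place this ID in."""
    return row_id % num_partitions

# Sequential auto-increment IDs round-robin across the 10 partitions,
# which keeps partition sizes even for this workload.
assignments = [hash_partition(i) for i in range(1, 11)]
print(assignments)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
```

This even spread is why each of the 10 partitions ends up with about 100 million of the 1 billion rows.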

InnoDB, hash-partitioned on ID into 10 partitions, 100 million rows per partition:

create table iuser
(
   ID        bigint not null auto_increment comment 'auto-increment ID',
   Username  varchar(30) not null comment 'username',
   sex       tinyint not null comment 'sex',
   birthday  datetime not null comment 'birthday',
   email     varchar(50) not null comment 'email',
   tel       int not null comment 'mobile number',
   intro     varchar(255) not null comment 'bio',
   primary key (ID)
)
engine = InnoDB
default charset=utf8
partition by hash(ID)
partitions 10;

 

 

InnoDB, no partitioning or sharding, a single table holding 1 billion rows:

create table ibiguser
(
   ID        bigint not null auto_increment comment 'auto-increment ID',
   Username  varchar(30) not null comment 'username',
   sex       tinyint not null comment 'sex',
   birthday  datetime not null comment 'birthday',
   email     varchar(50) not null comment 'email',
   tel       int not null comment 'mobile number',
   intro     varchar(255) not null comment 'bio',
   primary key (ID)
)
engine = InnoDB
default charset=utf8;

3. Data generation

A base table of 1 million unique rows was generated with PowerDesigner, and its contents were then repeatedly inserted into each target table with mysqlslap, which builds very large tables quickly. The final data files were all around 170 GB (about 17 GB per table in the sharded case).
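The article used PowerDesigner for the base data; purely as an illustration, a similar unique base set could be produced with a short script (column layout follows the DDL above; all names and value ranges below are hypothetical):

```python
import random
import string

def make_row(i: int) -> tuple:
    """Build one base-table row; the numeric suffix keeps usernames unique."""
    username = "user%07d" % i
    sex = random.randint(0, 1)
    birthday = "19%02d-%02d-%02d 00:00:00" % (
        random.randint(50, 99), random.randint(1, 12), random.randint(1, 28))
    email = username + "@example.com"
    tel = random.randint(130_000_000, 1_999_999_999)  # fits the signed int column
    intro = "".join(random.choices(string.ascii_lowercase, k=30))
    return (username, sex, birthday, email, tel, intro)

# One million of these rows would form the base table that mysqlslap
# re-inserts into the target tables; 1000 here for demonstration.
base = [make_row(i) for i in range(1, 1001)]
```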

Time to insert 1 million rows into each target table, in seconds:

MyISAM non-partitioned   10.514
MyISAM partitioned        9.114
InnoDB non-partitioned   16.258
InnoDB partitioned       16.893

For non-concurrent bulk inserts, partitioned MyISAM is fastest, non-partitioned MyISAM second, and partitioned InnoDB slowest.

 

 

4. Stress test

The mysqlslap stress-testing tool bundled with MySQL 5.1 was run against the five setups above. Results, in seconds:

Test (seconds)                  MyISAM      MyISAM     MyISAM    InnoDB      InnoDB
                                non-part.   part.      sharded   non-part.   part.
100 serial random SELECTs       4.55        4.11       5.389     4.445       5.893

50 concurrent random SELECTs    0.052       0.049      0.075     0.07        0.038
100 concurrent random SELECTs   0.071       0.078      0.11      0.088       0.055
250 concurrent random SELECTs   0.137       0.153      0.239     0.175       0.124
500 concurrent random SELECTs   0.266       0.273      0.447     0.283       0.24
1000 concurrent random SELECTs  0.5         0.533      0.849     0.485       0.46

50 concurrent random INSERTs    0.037       0.03       0.04      0.042       0.025
100 concurrent random INSERTs   0.071       0.06       0.057     0.06        0.049
250 concurrent random INSERTs   0.179       0.159      0.144     0.135       0.132
500 concurrent random INSERTs   0.384       0.327      0.332     0.248       0.25
1000 concurrent random INSERTs  0.82        0.73       0.737     0.542       0.526

50 concurrent random DELETEs    1.204       1.536      0.196     0.041       0.039
100 concurrent random DELETEs   2.333       2.945      0.362     0.062       0.063
250 concurrent random DELETEs   5.568       5.718      0.831     0.14        0.173
500 concurrent random DELETEs   9.663       9.705      1.42      0.275       0.308
1000 concurrent random DELETEs  19.877      17.254     2.655     0.589       0.634

50 concurrent random UPDATEs    1.008       0.887      0.298     0.044       0.046
100 concurrent random UPDATEs   1.789       1.688      0.533     0.064       0.062
250 concurrent random UPDATEs   4.291       3.986      1.029     0.134       0.145
500 concurrent random UPDATEs   8.357       7.872      1.783     0.306       0.288
1000 concurrent random UPDATEs  15.963      15.126     3.33      0.575       0.563
 

* A batch of 100 statements was run to measure query performance without concurrency; the statements are listed in Appendix 1.

* A stored procedure hashes the ID to route each operation to the correct shard table; see Appendix 2.
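The batch in Appendix 1 is nothing but point lookups on random IDs; a batch like it can be generated as follows (SQL_NO_CACHE keeps MySQL's query cache from masking the real index and I/O cost):

```python
import random

def make_batch(n: int = 100, max_id: int = 100_000_000) -> list:
    """Return n random point-lookup statements against biguser."""
    return [
        "select sql_no_cache * from biguser where id=%d;"
        % random.randint(1, max_id)
        for _ in range(n)
    ]

batch = make_batch()
```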
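The routing used for the sharded tables is MOD(id, 10) with remainder 0 mapped to user10; done in application code instead of a stored procedure it is a one-liner:

```python
def shard_table(vid: int) -> str:
    """Mirror p_hash_select's routing: MOD(vid, 10), remainder 0 -> user10."""
    i = vid % 10
    return "user%02d" % (i if i != 0 else 10)

# Build the query the procedure would run for a given ID:
query = "select * from %s where ID = %d" % (shard_table(12345), 12345)
```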

5. Conclusions:

1. Under high concurrency, partitioned InnoDB outperforms MyISAM on inserts, deletes, updates, and queries; for highly concurrent deletes and updates the gap can reach 20-30x.

2. Partitioned MyISAM (10 partitions) performs about the same as the non-partitioned table, and slightly better.

3. Partitioned MyISAM (10 partitions) inserts at about the same speed as the 10-table sharded setup but queries roughly 30% faster; for deletes and updates, however, sharding is far faster than partitioning.

4. At low concurrency, partitioned and non-partitioned InnoDB differ by about 40% on queries and inserts, but the gap shrinks as concurrency rises; their delete and update performance stays close throughout.

Appendix 1:

select sql_no_cache * from biguser where id=14988469;
select sql_no_cache * from biguser where id=57611709;
select sql_no_cache * from biguser where id=88737417;
select sql_no_cache * from biguser where id=5487431;
select sql_no_cache * from biguser where id=75912772;
select sql_no_cache * from biguser where id=39680341;
select sql_no_cache * from biguser where id=33444232;
select sql_no_cache * from biguser where id=53937959;
select sql_no_cache * from biguser where id=29031225;
select sql_no_cache * from biguser where id=1256108;
select sql_no_cache * from biguser where id=25314369;
select sql_no_cache * from biguser where id=24296361;
select sql_no_cache * from biguser where id=65966713;
select sql_no_cache * from biguser where id=14395018;
select sql_no_cache * from biguser where id=69197350;
select sql_no_cache * from biguser where id=90962562;
select sql_no_cache * from biguser where id=67908690;
select sql_no_cache * from biguser where id=34352967;
select sql_no_cache * from biguser where id=76458078;
select sql_no_cache * from biguser where id=9986401;
select sql_no_cache * from biguser where id=92969693;
select sql_no_cache * from biguser where id=21688034;
select sql_no_cache * from biguser where id=23534033;
select sql_no_cache * from biguser where id=49682571;
select sql_no_cache * from biguser where id=70230582;
select sql_no_cache * from biguser where id=23174504;
select sql_no_cache * from biguser where id=54671332;
select sql_no_cache * from biguser where id=60255403;
select sql_no_cache * from biguser where id=33064209;
select sql_no_cache * from biguser where id=30657185;
select sql_no_cache * from biguser where id=98478707;
select sql_no_cache * from biguser where id=27247625;
select sql_no_cache * from biguser where id=20287058;
select sql_no_cache * from biguser where id=67371577;
select sql_no_cache * from biguser where id=18519687;
select sql_no_cache * from biguser where id=55236080;
select sql_no_cache * from biguser where id=21413273;
select sql_no_cache * from biguser where id=2928926;
select sql_no_cache * from biguser where id=82738229;
select sql_no_cache * from biguser where id=47525876;
select sql_no_cache * from biguser where id=19732304;
select sql_no_cache * from biguser where id=63153463;
select sql_no_cache * from biguser where id=1685048;
select sql_no_cache * from biguser where id=4675229;
select sql_no_cache * from biguser where id=14525956;
select sql_no_cache * from biguser where id=86972953;
select sql_no_cache * from biguser where id=21576455;
select sql_no_cache * from biguser where id=5341705;
select sql_no_cache * from biguser where id=66182739;
select sql_no_cache * from biguser where id=54662774;
select sql_no_cache * from biguser where id=59012070;
select sql_no_cache * from biguser where id=52293127;
select sql_no_cache * from biguser where id=29983640;
select sql_no_cache * from biguser where id=21078779;
select sql_no_cache * from biguser where id=14937593;
select sql_no_cache * from biguser where id=17458594;
select sql_no_cache * from biguser where id=7928885;
select sql_no_cache * from biguser where id=73174679;
select sql_no_cache * from biguser where id=21741457;
select sql_no_cache * from biguser where id=86001336;
select sql_no_cache * from biguser where id=52555158;
select sql_no_cache * from biguser where id=23325556;
select sql_no_cache * from biguser where id=63580254;
select sql_no_cache * from biguser where id=83699681;
select sql_no_cache * from biguser where id=10742003;
select sql_no_cache * from biguser where id=81051582;
select sql_no_cache * from biguser where id=45963004;
select sql_no_cache * from biguser where id=95947261;
select sql_no_cache * from biguser where id=19606015;
select sql_no_cache * from biguser where id=24078829;
select sql_no_cache * from biguser where id=676655;
select sql_no_cache * from biguser where id=30713754;
select sql_no_cache * from biguser where id=29905848;
select sql_no_cache * from biguser where id=99763851;
select sql_no_cache * from biguser where id=1495837;
select sql_no_cache * from biguser where id=71316631;
select sql_no_cache * from biguser where id=88798686;
select sql_no_cache * from biguser where id=94214594;
select sql_no_cache * from biguser where id=54310537;
select sql_no_cache * from biguser where id=81828596;
select sql_no_cache * from biguser where id=89927714;
select sql_no_cache * from biguser where id=72684768;
select sql_no_cache * from biguser where id=50764835;
select sql_no_cache * from biguser where id=50720945;
select sql_no_cache * from biguser where id=63690819;
select sql_no_cache * from biguser where id=14890423;
select sql_no_cache * from biguser where id=94762181;
select sql_no_cache * from biguser where id=45487268;
select sql_no_cache * from biguser where id=50795204;
select sql_no_cache * from biguser where id=43415976;
select sql_no_cache * from biguser where id=44953453;
select sql_no_cache * from biguser where id=83626479;
select sql_no_cache * from biguser where id=42730160;
select sql_no_cache * from biguser where id=15229715;
select sql_no_cache * from biguser where id=22058075;
select sql_no_cache * from biguser where id=35277893;
select sql_no_cache * from biguser where id=90205735;
select sql_no_cache * from biguser where id=12501310;
select sql_no_cache * from biguser where id=24653162;
select sql_no_cache * from biguser where id=34248456;

Appendix 2:

use nopart;

drop procedure if exists p_hash_select;

delimiter //

create procedure p_hash_select(vid bigint)
begin
  set @i = mod(vid, 10);
  if @i = 1 then
    select * from user01 where ID = vid;
  end if;
  if @i = 2 then
    select * from user02 where ID = vid;
  end if;
  if @i = 3 then
    select * from user03 where ID = vid;
  end if;
  if @i = 4 then
    select * from user04 where ID = vid;
  end if;
  if @i = 5 then
    select * from user05 where ID = vid;
  end if;
  if @i = 6 then
    select * from user06 where ID = vid;
  end if;
  if @i = 7 then
    select * from user07 where ID = vid;
  end if;
  if @i = 8 then
    select * from user08 where ID = vid;
  end if;
  if @i = 9 then
    select * from user09 where ID = vid;
  end if;
  if @i = 0 then
    select * from user10 where ID = vid;
  end if;
end //

delimiter ;


------------------------------------------------------------------------------------------

MySQL supports RANGE, LIST, HASH, and KEY partitioning. This test uses LIST partitioning.
Table structure:

CREATE TABLE IF NOT EXISTS `tbtopic_p` (
  `iTid` int(11) NOT NULL,
  `iPraise` int(11) NOT NULL,
  `iDate` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
PARTITION BY LIST (iTid) (
  PARTITION p0 VALUES IN (3) ENGINE = InnoDB,
  PARTITION p1 VALUES IN (13) ENGINE = InnoDB,
  PARTITION p2 VALUES IN (23) ENGINE = InnoDB,
  PARTITION p3 VALUES IN (33) ENGINE = InnoDB,
  PARTITION p4 VALUES IN (43) ENGINE = InnoDB,
  PARTITION p5 VALUES IN (53) ENGINE = InnoDB,
  PARTITION p6 VALUES IN (63) ENGINE = InnoDB,
  PARTITION p7 VALUES IN (73) ENGINE = InnoDB,
  PARTITION p8 VALUES IN (83) ENGINE = InnoDB,
  PARTITION p9 VALUES IN (93) ENGINE = InnoDB
);
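Unlike HASH, LIST partitioning routes each row by explicit membership: every partition owns a fixed set of iTid values, and MySQL rejects an INSERT whose iTid appears in none of the lists. A sketch of the routing for the table above:

```python
# p0 owns iTid 3, p1 owns 13, ... p9 owns 93, matching the DDL above.
PARTITION_LISTS = {"p%d" % i: {10 * i + 3} for i in range(10)}

def route(itid: int) -> str:
    """Return the partition whose value list contains this iTid."""
    for name, values in PARTITION_LISTS.items():
        if itid in values:
            return name
    # MySQL would reject this row with a "no partition for value" error.
    raise ValueError("no partition list contains iTid=%d" % itid)
```

Because each iTid lives in exactly one partition, a query filtered on iTid only has to touch that one partition.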





The table holds 971,689 rows, close to one million.
Query performance was then compared between the non-partitioned and partitioned tables:

Non-partitioned:
SELECT * FROM `tbtopic3` WHERE `iTid` = 3
Three runs took 0.0244, 0.0252, and 0.0346 seconds.

Partitioned:
SELECT * FROM `tbtopic_p` WHERE `iTid` = 3
Three runs took 0.0136, 0.0012, and 0.0013 seconds.

In summary, partitioning greatly improves query performance here: the filter on iTid lets the optimizer scan only partition p0 instead of the whole table.



