SELECT table_name, data_free * 100 / data_length AS frag_pct, data_free, data_length FROM information_schema.TABLES WHERE table_schema = 'dbschemename';
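A slightly more defensive variant of the same lookup, guarding against a NULL or zero data_length and showing only noticeably fragmented tables (the schema name 'dbschemename' and the 10% cutoff are illustrative placeholders, not from the original post):

```sql
-- Fragmentation report: DATA_FREE is the allocated-but-unused space
-- inside the tablespace. 'dbschemename' and the 10% threshold are
-- placeholders; adjust them for your environment.
SELECT table_name,
       data_free,
       data_length,
       ROUND(data_free * 100 / NULLIF(data_length, 0), 2) AS frag_pct
FROM information_schema.TABLES
WHERE table_schema = 'dbschemename'
  AND data_free * 100 / NULLIF(data_length, 0) > 10
ORDER BY frag_pct DESC;
```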
Tables with frequent deletes tend to accumulate a lot of fragmentation; if the ratio is high and the table is not too large, it is worth defragmenting. An InnoDB table does not need OPTIMIZE TABLE to be defragmented:
ALTER TABLE zm_meeting_history ENGINE=InnoDB will not block DML.
These two still block DML (concurrent sessions show "Waiting for table metadata lock"):
ALTER TABLE zm_meeting_history FORCE;
OPTIMIZE TABLE zm_meeting_history;
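As a sketch, the rebuild can be requested explicitly as in-place and non-locking (clauses available with MySQL 5.6 online DDL; actual support depends on the server version, as the answer below notes), so the server raises an error instead of silently falling back to a blocking table copy:

```sql
-- Ask for an online, non-blocking rebuild; MySQL errors out
-- if the requested ALGORITHM/LOCK combination is not supported.
ALTER TABLE zm_meeting_history ENGINE=InnoDB, ALGORITHM=INPLACE, LOCK=NONE;
```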
This problem may depend on the data. First, ALTER TABLE `zm_polling` FORCE, ALGORITHM=INPLACE, LOCK=NONE did not work.
After deleting rows, ALTER TABLE zm_polling DROP INDEX meeting_number still did not work.
Then re-adding the index with ALTER TABLE zm_polling ADD INDEX meeting_number(meeting_number) did work. It is confusing how the data could influence this.
(meeting_number is a BIGINT column.)
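The sequence that was tried, written out as statements (a sketch reconstructed from the notes above; the explicit ALGORITHM/LOCK clauses on steps 2 and 3 are an assumption added here so each step fails fast rather than falling back to a blocking copy):

```sql
-- 1) Full in-place rebuild: failed in this case.
ALTER TABLE `zm_polling` FORCE, ALGORITHM=INPLACE, LOCK=NONE;

-- 2) Drop the secondary index: still failed after deleting rows.
ALTER TABLE `zm_polling` DROP INDEX meeting_number, ALGORITHM=INPLACE, LOCK=NONE;

-- 3) Re-create the secondary index: this step succeeded.
ALTER TABLE `zm_polling` ADD INDEX meeting_number (meeting_number), ALGORITHM=INPLACE, LOCK=NONE;
```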
Answer: upgrade to MySQL 5.6.27 or later.
- Optimizing the table compacts the primary key (clustered index) to its default 15/16ths fill factor per page. But other indexes will be built in pseudo-random order and are likely to end up just as fragmented afterwards as before. Which indexes are more important for performance? Maybe the primary key is just a dummy value that's not even used, and the secondary indexes are the ones that would benefit from compacting.
- Suppose the primary key is the important one, and SELECT queries will perform more quickly if it’s defragmented. Why does it get fragmented? Because of changes to the table. Now these changes could suddenly slow down dramatically as they are forced to split pages at a much higher rate due to the more compact data layout.
http://www.xaprb.com/blog/2010/02/07/how-often-should-you-use-optimize-table/
https://www.percona.com/blog/2010/12/09/mysql-optimize-tables-innodb-stop/
Source: ITPUB blog, http://blog.itpub.net/545828/viewspace-2101457/ (the original notice requires reprints to credit the source).