Study notes: Rick's RoTs -- Rules of Thumb for MySQL

 

 

Brought to you by Rick James

Here are 170+ tips, tricks, suggestions, etc. They come from a decade of improving performance in MySQL in thousands of situations. There are exceptions to the statements below, but they should help guide you into better understanding how to effectively use MySQL. 

Except where noted, the tips in this document apply to MySQL, MariaDB, and Percona. 

SELECTs -- do's and don'ts


    ⚈  Do not hide an indexed column inside a function call:  DATE(x) = '...' or LCASE(col) = 'foo' 

    ⚈  LCASE() is usually unnecessary because the collation will compare 'correctly' without it. 

    ⚈  Subqueries usually perform poorly; try to turn them into JOINs. ↣ 

    ⚈  Usually IN (SELECT...) optimizes poorly. Turn into JOIN. (5.6.5 improves) ↣ 

    ⚈  A subquery that condenses data (GROUP BY, LIMIT, etc) may perform well 

    ⚈  OR may be very inefficient; turn into UNION

    ⚈  A coding pattern for a date range: 
dt >= '2010-02-01' AND
    dt <  '2010-02-01' + INTERVAL 7 DAY


    ⚈  WHERE (a,b) > (7,8) is poorly optimized 

    ⚈  Gather these to study a slow query: SHOW CREATE TABLE and EXPLAIN SELECT .... ↣ 

    ⚈  Do not use OFFSET for pagination -- continue where you "left off". ↣ 

    ⚈  Don't mix DISTINCT and GROUP BY 

    ⚈  DISTINCT is not a function. ↣ 

    ⚈  Be explicit about UNION ALL vs UNION DISTINCT -- it makes you think about which to use 

    ⚈  Do not use SELECT * except for debugging or when fetching into a hash. (Instead, spell out the columns you need.) ↣ 

    ⚈  VIEWs are often poorly optimized. ↣ 

    ⚈  A subquery in the FROM clause may be useful for retrieving BLOBs without sorting them: Speed up a query by first finding the IDs, then self-JOIN to fetch the rest. 

    ⚈  Don't splay an array across columns; use another table. 
Subqueries came to MySQL rather late in the game. For many years they were not well optimized, so it is usually better to turn a subquery into an equivalent JOIN. This is especially true for IN ( SELECT ... ). Optimization improved somewhat in 5.6.5 and MariaDB 5.5; 5.7 improves further. 
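
A minimal sketch of the rewrite (table and column names here are hypothetical):

-- Often slow, especially before 5.6.5:
SELECT o.* FROM orders o
    WHERE o.customer_id IN ( SELECT id FROM customers WHERE state = 'CA' );

-- Usually faster as a JOIN (customers.id is the PRIMARY KEY, so no duplicate rows arise):
SELECT o.* FROM orders o
    JOIN customers c ON c.id = o.customer_id
    WHERE c.state = 'CA';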

Sometimes a subquery is really the best way to optimize a SELECT. The common thread of these "good" subqueries seems to be when the subquery has to scan a lot of rows, but boils down the intermediate resultset to a small number of rows. This is likely to happen with GROUP BY or LIMIT in the subquery. 
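
A sketch of such a "good" subquery (hypothetical names): the inner query boils the table down to 10 ids, and the outer self-JOIN fetches the wide columns only for those rows:

SELECT f.* FROM facts f
    JOIN ( SELECT id FROM facts ORDER BY score DESC LIMIT 10 ) AS top
      ON top.id = f.id;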

SELECT DISTINCT(a), b FROM ... is the same as SELECT DISTINCT a, b FROM .... That is, the combination of all the columns is de-duplicated, not just the one in parentheses. 

For analyzing a slow query, SHOW CREATE TABLE provides the datatypes, indexes, and engine. (DESCRIBE provides much less info.) SHOW TABLE STATUS tells how big the table is. EXPLAIN says how the query optimizer is likely to perform the query. EXPLAIN FORMAT=JSON gives more details. The OPTIMIZER_TRACE can give more. 
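
A typical gathering session in the mysql client (substitute your own table and slow query):

SHOW CREATE TABLE orders\G
SHOW TABLE STATUS LIKE 'orders'\G
EXPLAIN SELECT * FROM orders WHERE customer_id = 123\G
EXPLAIN FORMAT=JSON SELECT * FROM orders WHERE customer_id = 123\G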

It is so tempting to use ORDER BY id LIMIT 30,10 to find the 4th page of 10 items. But it is so inefficient, especially when you have thousands of pages: the thousandth page has to read (at some level) all the pages before it. Instead, remember where you "left off"; that is, have the "Next" button on one page give the id (or other sequencing info) of where the next page can be found. Then that page simply does WHERE id > $leftoff ORDER BY id LIMIT 10. More on pagination.
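
A minimal sketch of that pattern (items is hypothetical; $leftoff is an application-side placeholder for the last id shown on the previous page):

-- First page:
SELECT id, title FROM items ORDER BY id LIMIT 10;

-- Subsequent pages -- no OFFSET, so the cost does not grow with the page number:
SELECT id, title FROM items WHERE id > $leftoff ORDER BY id LIMIT 10;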

"SELECT *" will break your code tomorrow when you add another field. It is better to spell out the fields explicitly. (There is no noticeable performance difference.) 

A VIEW is syntactic sugar around a SELECT. The Optimizer can't do any better with a VIEW than with the underlying SELECT. Sometimes it does worse, though the Optimizer is improving over time. 



INDEXing


    ⚈  Start an INDEX with "="s from the WHERE clause, then one other thing (range, group, order). ↣ 

    ⚈  Terms: PRIMARY KEY > UNIQUE > INDEX = KEY. ↣ 

    ⚈  An index may speed up a SELECT by orders of magnitude and will slow down INSERTs a little. (Usually this is a good tradeoff.) 

    ⚈  Adding indexes is not a panacea. 

    ⚈  BTree is an excellent all-around indexing mechanism. 

    ⚈  A BTree index node contains ~100 items. (1M rows = ~3 levels; 1B rows = ~5 levels) 

    ⚈  Flags, and other fields with few values ("low cardinality"), should not be alone in an index -- the index won't be used. 

    ⚈  MySQL rarely uses two INDEXes in one SELECT. Main exceptions: subqueries, UNION. 

    ⚈  A "prefix" index -- INDEX(name(10)) -- is rarely useful. Exception: TEXT 

    ⚈  A UNIQUE "prefix" is probably wrong -- UNIQUE(name(10)) forces only 10 chars to be unique. 

    ⚈  It is ok to have Index_length > Data_length. 

    ⚈  5 columns in a compound index seems "too many" 

    ⚈  Having no "compound" (aka "composite") indexes is a clue that you do not understand their power. INDEX(a,b) may be much better than INDEX(a), INDEX(b) 

    ⚈  The columns in a composite index are used left to right. 

    ⚈  INDEX(a,b) covers for INDEX(a), so drop the latter. 

    ⚈  ORDER BY a ASC, b DESC cannot use INDEX(a ASC,b DESC); 8.0 fixes this 

    ⚈  2x speedup when "Using index" (a "covering" index). ("Using index condition" is a different animal.) ↣ 

    ⚈  2 out of 3 tables have a perfectly good "natural" PRIMARY KEY, so AUTO_INCREMENT is not necessary. 

    ⚈  FULLTEXT -- watch out for min_word_len, stopwords, and 50% rule. MyISAM and InnoDB have differences. 

    ⚈  A FULLTEXT index will be used before any other index. 

    ⚈  FULLTEXT -- consider Sphinx, Lucene, Solr, etc (3rd Party). 
Indexing is very important to any database. Getting the "right" index can make a query run orders of magnitude faster. So, how to do that? Often "compound indexes" (multiple columns in a single INDEX(...)) are better than single-column indexes. A WHERE clause that has column=constant begs for an index that starts with that column. If the WHERE clause has multiple fields AND'd together, "="s should come first. 
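
For example (hypothetical table), a query with two "="s and one range begs for an index with the "=" columns first and the range column last:

SELECT id FROM orders
    WHERE status = 'shipped'            -- '='
      AND store_id = 7                  -- '='
      AND shipped_at >= '2017-01-01';   -- range goes last

ALTER TABLE orders ADD INDEX(status, store_id, shipped_at);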

INDEXes are structured in BTrees. The "root" node (a block) of a BTree has pointers to child blocks. This goes as deep as necessary, but really not very deep (see the 100x RoT). MyISAM uses 1KB blocks; InnoDB uses 16KB blocks. 

Each INDEX is its own BTree. A PRIMARY KEY is a UNIQUE key is an INDEX. INDEX and KEY are synonymous. In InnoDB, the data is included in the BTree for the PRIMARY KEY. In MyISAM, the data is a separate file (.MYD). 

A "covering" index is one where all the fields needed in a SELECT are included in the INDEX. 

Index Cookbook
(That includes tips on many:many mapping tables and wp_postmeta.)



ENGINE Differences


    ⚈  InnoDB is faster than MyISAM -- contradicts the 'old' wisdom; InnoDB has improved. ↣ 

    ⚈  Use InnoDB instead of MyISAM. MyISAM is going away, not supported, fragile, etc. ↣ 

    ⚈  2x-3x bigger disk footprint for InnoDB than MyISAM 

    ⚈  When converting from MyISAM to InnoDB, do learn the differences in INDEXes, especially the PRIMARY KEY. 

    ⚈  A secondary index in InnoDB implicitly includes the PRIMARY KEY. 

    ⚈  An InnoDB table should have an explicit PRIMARY KEY, even if it is an artificial AUTO_INCREMENT. 

    ⚈  "Rows" and "Avg_row_length" (SHOW TABLE STATUS) may be off by a factor of 2 (either way) for InnoDB. 

    ⚈  For MyISAM, don't OPTIMIZE TABLE unless there is more than 10% Data_free according to SHOW TABLE STATUS 

    ⚈  For InnoDB, don't OPTIMIZE TABLE -- it is rarely of any use (and it turns into an ALTER) 

    ⚈  Don't let a BEGIN...COMMIT last more than a few seconds. ↣ 

    ⚈  Use BEGIN, not autocommit=0 -- it is less confusing. 
MyISAM dates back to the '90s. It was designed to be simple and fast, while lacking important database features (ACID). Since InnoDB came along in the '00s, most development energy has gone into it. Today, InnoDB (and Percona's XtraDB) are excellent engines. InnoDB recovers automatically after a crash. 

MyISAM's simplicity leads to very little overhead on disk. InnoDB's transactional semantics, MVCC, ROLLBACK, undo, etc, lead to a lot of disk overhead. Some can be mitigated, but it is a tradeoff between speed and robustness. 

MyISAM caches INDEX blocks in the "key_buffer". Data blocks are cached by the Operating system. InnoDB caches both data and indexes in the "buffer_pool". These lead to radically different tuning (see 70% RoT). 

With InnoDB, either use autocommit=1, making each operation into a "transaction", or use BEGIN (aka START TRANSACTION) and COMMIT to explicitly bracket each set of statements that makes up a transaction. InnoDB can prevent many kinds of "deadlocks" by simply stalling one transaction until it is safe to continue. Some deadlocks are detected only after the transactions have started; one of the competing transactions is then aborted, and the application must watch for that ROLLBACK and retry. Still other situations hang until a timeout: innodb_lock_wait_timeout, which defaults to 50 seconds; 50 is too large. 
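
A minimal sketch of explicit bracketing (hypothetical table; the application should check each statement for errors and be prepared to retry the whole transaction after a deadlock ROLLBACK):

BEGIN;    -- aka START TRANSACTION
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- keep the elapsed BEGIN...COMMIT time under a few seconds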

MyISAM to InnoDB conversion tips

Galera Tips
Galera and Group Replication have other transactional implications.



Optimizations, and not


    ⚈  "Using Temporary" and "Filesort" are not the end of the world. And, often they are unavoidable. 

    ⚈  Might even get 2 Filesorts: GROUP BY x ORDER BY y 

    ⚈  Avoid using index hints (USE / FORCE / IGNORE / STRAIGHT_JOIN) ↣ 

    ⚈  You cannot "tune your way out of a CPU problem"; instead, rethink the slow queries. 

    ⚈  Do not strive for "Fixed" length rows in MyISAM. Usually it makes performance worse (by increasing I/O). 

    ⚈  If more than 20% of the rows are needed, a table scan is faster than using the obvious INDEX. (This RoT usually explains why "the optimizer did not use my index".) ↣ 

    ⚈  key-value stores perform poorly. Toss most k-v into a JSON blob. ↣ 

    ⚈  For fast loading use LOAD DATA or batched INSERTs. ↣ 

    ⚈  Use 100-1000 rows per batch INSERT (10x speedup). (Important to limit in Replication) ↣ 

    ⚈  Similarly, DELETE in chunks of 1000. ↣ 

    ⚈  Rows clustered together are 10x faster to access than random rows (fewer blocks to fetch) 

    ⚈  SHOW CREATE TABLE is more descriptive than DESCRIBE

    ⚈  innodb_flush_log_at_trx_commit = 1 for safety; = 2 for speed (often a lot of speed). RAID with battery-backed cache can use 1 (and be fast). Galera can use 2 (use SST for recovery). 

    ⚈  How seriously to take optimization? 1K rows: yawn; 1M rows: serious; 1B rows: all of these tips, plus more. 

    ⚈  "Profiling" a query is rarely useful. 
Index hints (FORCE INDEX, etc) may help you today, but different constants in the WHERE clause may lead FORCE to do the "wrong" thing tomorrow. 

If most rows can be eliminated by using an INDEX, then the index is the efficient way to go. If not many rows can be eliminated, then the bouncing back and forth between the index and the data is likely to be more costly than ignoring the index and simply scanning the whole table. The 20% is approximate -- 10%-30% seems to be the typical range for the cutoff between the two algorithms. 

If you have a flexible number of attributes on a table, and are puzzling over how to build the schema, you might think to use a separate "key-value" or "EAV" table. Or you might add lots of columns, only to stumble over the messiness of adding more columns. Many "NoSQL" competitors try to make this easy. Within MySQL, put all these attributes in a JSON blob. I recommend compressing the blob. (3:1 compression is typical for English text, XML, code, etc.) Make 'real' columns for the important attributes that need to be indexed. BTW, MariaDB has "dynamic columns"; it even allows indexing sparse columns hidden in the key-value blob. More discussion of EAV issues and solutions.
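
A minimal sketch of that layout (hypothetical names; the client compresses the JSON before INSERTing):

CREATE TABLE items (
    id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    price DECIMAL(8,2) NOT NULL,   -- a 'real' column, because it needs an index
    attrs BLOB NOT NULL,           -- client-compressed JSON holding the remaining attributes
    INDEX(price)
) ENGINE=InnoDB;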

If you can arrange for rows to be "adjacent" to each other, then one disk fetch will bring in many rows (10x speedup on HDD; perhaps 4x for SSD). 

"Batched" INSERTs are where one INSERT statement has multiple rows. Nearly all of the performance benefit is in the first 100 rows; going beyond 1000 is really getting into 'diminishing returns'. Furthermore, in a Replication environment, a huge INSERT would cause the Slave to get 'behind'. 

More on Large Deletes / Updates
Indexing tip for WP's postmeta



PARTITIONing


    ⚈  Don't use PARTITION until you know how and why it will help. ↣ 

    ⚈  Don't use PARTITION unless you will have >1M rows 

    ⚈  No more than 50 PARTITIONs on a table (open, show table status, etc, are impacted) ↣ 

    ⚈  PARTITION BY RANGE is the only useful method. ↣ 

    ⚈  SUBPARTITIONs are not useful. 

    ⚈  The partition key should not be the first column in any index. 

    ⚈  It is OK to have an AUTO_INCREMENT as the first part of a compound key, or in a non-UNIQUE index. ↣ 
It is so tempting to believe that PARTITIONing will solve performance problems. But it is so often wrong. 

PARTITIONing splits up one table into several smaller tables. But table size is rarely a performance issue. Instead, I/O time and indexes are the issues. 

Perhaps the most common use case where PARTITIONing shines is a dataset where "old" data is deleted from the table periodically. PARTITION BY RANGE by day (or other unit of time) lets you do a nearly instantaneous DROP PARTITION plus REORGANIZE PARTITION instead of a much slower DELETE. 

An AUTO_INCREMENT column must be the first column in some index. (That lets the engine find the 'next' value when opening the table.) It does not have to be the only field, nor does it have to be PRIMARY or UNIQUE. If it is not UNIQUE, you could INSERT a duplicate id if you explicitly provide the number. (But you won't do that, will you?) 
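
A minimal sketch of time-based RANGE partitioning (hypothetical table). Note that the AUTO_INCREMENT id is first in the PRIMARY KEY, and the partition key dt is included in it (as it must be in every UNIQUE key):

CREATE TABLE events (
    id INT UNSIGNED AUTO_INCREMENT NOT NULL,
    dt DATE NOT NULL,
    msg VARCHAR(100) NOT NULL,
    PRIMARY KEY(id, dt)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION p20170901 VALUES LESS THAN (TO_DAYS('2017-09-02')),
    PARTITION p20170902 VALUES LESS THAN (TO_DAYS('2017-09-03')),
    PARTITION pFuture   VALUES LESS THAN MAXVALUE
);

-- Purging a day of old data is nearly instantaneous:
ALTER TABLE events DROP PARTITION p20170901;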

The "Data Dictionary" of version 8.0 probably eliminates the 50-partition advice. ("Stay tuned.") 

Partition Maintenance (sliding timeframe)



Memory Usage


    ⚈  70% of RAM for innodb_buffer_pool_size (when using just InnoDB) ↣ 

    ⚈  20% of RAM for key_buffer_size (when using just MyISAM) ↣ 

    ⚈  Leave other memory tunables alone 

    ⚈  Do NOT let mysql swap 

    ⚈  thread_cache_size -- A small, non-zero, number (like 10) is good. 

    ⚈  If (Opened_tables / Uptime) > 1/sec, increase table_open_cache

    ⚈  Turn off the Query Cache: query_cache_type=off and query_cache_size=0. ↣ 

    ⚈  Set tmp_table_size and max_heap_table_size to about 1% of RAM. 
The 70% and 20% RoTs assume more than 4GB of RAM, the server is mostly for MySQL, and you are not using both engines. 

MySQL performance depends on being in control of its use of RAM. The biggest pieces are the caches for MyISAM or InnoDB. These caches should be tuned to use a large chunk of RAM. Other things that can be tuned rarely matter much, and the default values in my.cnf (my.ini) tend to be "good enough". 

The "Query Cache" is totally distinct from the key_buffer and the buffer_pool. ALL QC entries for one table are purged when ANY change to that table occurs. Hence, if a table is being frequently modified, the QC is virtually useless. And having the QC large slows things down (except in Aurora). 
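
To turn it off at runtime (both variables are dynamic; also put the settings in my.cnf so they survive a restart):

SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = OFF;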

Tuning memory settings



Character Sets


    ⚈  Use SET NAMES utf8 (or equivalent). 

    ⚈  Use only ascii and utf8 (or utf8mb4 in 5.5.3+) 

    ⚈  MySQL's utf8mb4 is equivalent to the outside world's UTF-8; MySQL's utf8 is only a subset. 

    ⚈  Better COLLATION: utf8_unicode_520_ci > utf8_unicode_ci > utf8_general_ci > utf8_bin 

    ⚈  Debug stored data via HEX(col), LENGTH(col), CHAR_LENGTH(col) 

    ⚈  Do not use utf8 for hex or ascii strings (GUID, UUID, sha1, md5, ip_address, country_code, postal_code, etc.) 
Too many people figure that MySQL will 'just work' when it comes to utf8. Well, it doesn't. And it is complex. And if you mess it up, it is complicated to un-mess up. 

European accents take 1 byte in latin1, 2 in utf8. LENGTH(col) >= CHAR_LENGTH(col) always; with European text, '=' holds for latin1 and '>' for utf8. 
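
A sketch of the debugging technique (hypothetical table): for 'é' stored correctly, HEX shows E9 in latin1 but C3A9 in utf8, and CHAR_LENGTH stays 1 while LENGTH grows:

SELECT col, HEX(col), LENGTH(col), CHAR_LENGTH(col)
    FROM t WHERE id = 1;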

Troubleshooting charset problems
Interpreting HEX



Datatypes - Directly supported


    ⚈  INT(5) is not what you think. (See SMALLINT, etc) 

    ⚈  FLOAT(7,2) -- No; just say FLOAT 

    ⚈  Learn the sizes: INT & FLOAT are 4 bytes, etc 

    ⚈  Before using BIGINT (8 bytes), ask whether you really need such a big range. 

    ⚈  Almost never use CHAR instead of VARCHAR. Use CHAR for things that are truly fixed length (and usually ascii). 

    ⚈  Do not have separate DATE and TIME columns, nor separate YEAR, MONTH, etc. 

    ⚈  Most INTs should be UNSIGNED

    ⚈  Most columns should be NOT NULL

    ⚈  TIMESTAMP's DEFAULT changed in 5.6.5. 

    ⚈  VARCHAR(255) has some drawbacks over VARCHAR(11). ↣ 

    ⚈  Overlapping time ranges (shorter expression, but still not indexable): WHERE a.start < b.end AND a.end > b.start 

    ⚈  Don't be surprised by AUTO_INCREMENT values after uncommon actions. ↣ 
If you have a million rows in a table, then the space difference between INT (4 bytes) and TINYINT (1 byte) is 3MB. So, if you have large tables, learn the sizes of each datatype, and pick the datatypes with an eye to minimizing the table size. Smaller → More cacheable → Less I/O → Faster. 
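
A sketch of picking sizes deliberately (hypothetical table; check each column's real range first):

CREATE TABLE readings (
    sensor_id SMALLINT UNSIGNED NOT NULL,   -- 2 bytes: 0..65535 sensors
    reading_ct MEDIUMINT UNSIGNED NOT NULL, -- 3 bytes: 0..~16.7M
    temp_c TINYINT NOT NULL                 -- 1 byte: -128..127
) ENGINE=InnoDB;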

An AUTO_INCREMENT is very non-random, at least for inserting. Each new row will be at the 'end' of the table; that is, the last block is the "hot spot". Thanks to caching, very little I/O is needed for an AUTO_INCREMENT index. 

VARCHAR(255) for everything is tempting. And for "small" tables it won't hurt. For large tables one needs to consider what happens during the execution of complex SELECTs. VARCHAR(255) + 5.6 + Indexing + utf8mb4 = Error about a 767 limitation. See Workarounds for 767

"Burning ids": A DELETE of the last row may or many not burn that AUTO_INCREMENT id. INSERT IGNOREburns ids because it allocates values before checking for duplicate keys. A Slave may see InnoDB ids arriving out of order (because transactions arrive in COMMIT order). A ROLLBACK (explicit or implicit) will burn any ids already allocated to INSERTs. REPLACE = DELETE + INSERT, so the INSERT comments apply to REPLACE. After a crash, the next id to be assigned may or may not be what you expect; this varies with Engine and version. Etc. 



Datatypes - Implicit


    ⚈  Money -- Do not use FLOAT/DOUBLE. DECIMAL(13,4) (6 bytes) is probably wide enough for any currency this decade. 

    ⚈  GUID/UUID/MD5, as a key, will perform poorly when the table is big. ↣ 

    ⚈  Store GUID/UUID/MD5/SHA1 in BINARY(16/20), not VARCHAR(36/32/40) (after suitable conversion). ↣ 

    ⚈  IP addresses -- don't ignore IPv6, it's here now! VARBINARY(39) or BINARY(16) ↣ 

    ⚈  Most SEQUENCE uses can/should be converted to AUTO_INCREMENT. See also MariaDB's sequence table. 

    ⚈  "Don't Queue it, just Do it" -- A database does not make a good queuing system. 

    ⚈  Store images in a table? No clearcut decision. 

    ⚈  Do NOT store credit card numbers, SSNs, etc, until you learn the legal ramifications of your db being hacked! 

    ⚈  Latitude/Longitude -- to distinguish houses, DECIMAL(6,4)/(7,4) is sufficient. Using DOUBLE lets you distinguish hairs on a flea; do you need that? ↣ 

    ⚈  JSON -- If the version does not have a JSON datatype, consider using TEXT or compressing (in the client) and using BLOB. ↣ 
GUIDs (etc) are very random. A big table with an index on such a field will be costly to maintain. That is, the 'next' GUID to be inserted is not likely to be in an index block that is currently cached. UUID comments

Since GUID, UUID, MD5, and SHA1 are fixed length, VAR is not needed. If they are in hex, don't bother with utf8; use BINARY or CHAR CHARSET ascii. 
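
A sketch of the conversion, assuming a BINARY(16) column named uuid in a hypothetical table t:

-- Store: strip the dashes, then convert 32 hex digits to 16 bytes:
INSERT INTO t (uuid)
    VALUES ( UNHEX(REPLACE('3f06af63-a93c-11e4-9797-00505690773f', '-', '')) );

-- Display in readable (hex) form:
SELECT HEX(uuid) FROM t;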

Images could be stored in BLOB (not TEXT). This better assures referential integrity (you won't accidentally delete the metadata while keeping the image, or vice versa). On the other hand, it is clumsy. With files, an img tag can point directly to the image on disk. 

Efficient algorithm for finding "nearest" with latitude/longitude

Scaling IP range data

Using JSON for Entity-Attribute-Value



Hardware


    ⚈  "Count the disk hits" -- For I/O-bound queries this is the metric to use. ↣ 

    ⚈  A schema solution is usually better than a hardware solution for performance. 

    ⚈  Plain disks can handle 100 reads/writes per second; SSDs: 1000/sec 

    ⚈  RAID helps by a multiple (the number of drives) 

    ⚈  RAID with BBWC (BBU): writes are "free"; set sync_binlog = 1 

    ⚈  Use disk RAID striping instead of manually spreading files across drives 

    ⚈  O_DIRECT or O_ALL_DIRECT (if drive directly connected) 

    ⚈  Elevator: Noop or Deadline, not CFQ 

    ⚈  SSD: innodb_flush_neighbors = 0; mount with nobarrier 

    ⚈  Filesystem: XFS or ZFS ↣ 

    ⚈  Hardware cannot make up for schema/query design flaws. 

    ⚈  5.6 claims to be good to 48 cores - YMMV; 5.7 claims 64. 

    ⚈  A single connection will not use more than one core. Not even with UNION or PARTITION

    ⚈  Don't put a cache in front of a cache 

    ⚈  10x speed up when disk blocks are cached, so... Time a query twice -- first will get things cached, second will do no I/O. ↣ 

    ⚈  Benchmark with SELECT SQL_NO_CACHE ... to avoid Query cache. ↣ 

Because I/O is so much slower than CPUs, the first few queries on a 'cold' system take longer. This can be demonstrated by running a SELECT twice. (Be sure to avoid the Query cache.) The first run will fetch data from disk; the second will find everything cached -- roughly 10x faster. 
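
For example, run the same statement twice and compare the times; SQL_NO_CACHE (table name hypothetical) keeps the Query cache from short-circuiting the second run:

SELECT SQL_NO_CACHE COUNT(*) FROM big_table WHERE dt >= '2017-01-01';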

"Count the disk hits": For large databases, the I/O dominates the time consumed -- often by a factor of 10. So, focus on how to minimize I/O, "cluster" data, create indexes that are more effective, etc. 

Tentative ZFS settings: zfs_prefetch_disable=1, primarycache=metadata (because O_DIRECT is not available on ZFS), recordsize=16K, logbias=throughput, innodb_checksums=0 or innodb_checksum_algorithm=none (ZFS does the checksumming already), innodb_doublewrite=0. 



PXC / Galera


    ⚈  InnoDB only; always have a PRIMARY KEY 

    ⚈  Check for errors, even after COMMIT 

    ⚈  For optimal performance, use 'medium-sized' transactions 

    ⚈  Cross-colo replication may be faster or slower than traditional replication 

    ⚈  AUTO_INCREMENT values won't be consecutive 

    ⚈  Handle "critical reads" using wsrep_causal_reads 

    ⚈  ALTERs need to be handled differently (see RSU vs TOI) 

    ⚈  Lots of tricks are based on: remove a node from cluster + do stuff + add back to cluster 

    ⚈  Minimal HA: 1 node in each of 3 datacenters; one could be just a garbd (the Galera arbitrator) 
Most of these RoTs apply to Group Replication / InnoDB Cluster, though the terminology may be different. 

A few things need to be done differently when using Galera-based systems, such as Percona XtraDB Cluster. 

Since there is one inter-node action per transaction, medium-sized transactions are a good tradeoff between inter-node delays and prompt replication. 

Trust nodes to heal themselves (via SST or IST); this leads to significantly lowered manual intervention for dead nodes, etc. 

Critical Reads are no longer a problem, except for the minor code change. 

More discussion



Data Warehouse


    ⚈  Create Summary Table(s) ("materialized views") ↣ 

    ⚈  Normalize, but don't over-normalize. 

    ⚈  The average of averages is (usually) mathematically incorrect. ↣ 

    ⚈  InfoBright (3rd party) -- 10:1 compression; all columns automatically indexed 

    ⚈  TokuDB (3rd party) -- 3:1 compression; faster loading ("fractal" technology) 

    ⚈  PARTITION BY RANGE the Fact table on a unit of time (to make DELETE of old data efficient). 

    ⚈  Use the smallest practical datatype for each field (to shrink the 'fact' table). 

    ⚈  Use InnoDB. That way, recovery from a power failure will be fast and painless. 

    ⚈  Don't have any indexes other than an AUTO_INCREMENT PRIMARY KEY for the fact table. That way, INSERTs into it will be fast. Periodic augmentation of the summary table(s) can use that to keep track of where they "left off". 

    ⚈  "Sharding" (splitting data across multiple machines) is mostly do-it-yourself. Or... Clustrix, Spider 

    ⚈  Look into 3rd party solutions such as ColumnStore or TokuDB in MariaDB 
Data Warehousing usually has "reports". These tend to be COUNTs, SUMs, AVERAGEs, etc, of large amounts of data, broken down by hours/days/weeks, together with some other "dimensions" like department, country, product, etc. 

Doing such reports against the raw ("Fact") table is costly because of the I/O to read lots of that table. Creating and maintaining "Summary tables" is a technique for generating reports much more efficiently (typically 10x-100x faster). 

A Summary table usually has, say, PRIMARY KEY(product, day), plus other columns that are COUNTs, SUMs, etc, of metrics for the given product+day. A report reads the Summary table, not the Fact table, and it finishes any further arithmetic. A Summary table based on days can be used to generate a weekly report by suitable SUMs and GROUP BY. AVERAGEs should be done by SUM(sum)/SUM(count). 
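
A minimal sketch (hypothetical names): a Fact table sales_fact feeds a Summary table keyed by product and day; reports read only the summary:

CREATE TABLE sales_summary (
    product VARCHAR(20) NOT NULL,
    day DATE NOT NULL,
    ct INT UNSIGNED NOT NULL,
    total DECIMAL(13,4) NOT NULL,
    PRIMARY KEY(product, day)
) ENGINE=InnoDB;

-- Periodic augmentation from the Fact table (here, one day's worth):
INSERT INTO sales_summary (product, day, ct, total)
    SELECT product, DATE(sale_time), COUNT(*), SUM(amount)
        FROM sales_fact
        WHERE sale_time >= '2017-09-01' AND sale_time < '2017-09-02'
        GROUP BY product, DATE(sale_time);

-- Weekly report; note SUM(total)/SUM(ct), not an average of averages:
SELECT product, SUM(ct) AS ct, SUM(total) AS total, SUM(total)/SUM(ct) AS avg_sale
    FROM sales_summary
    WHERE day >= '2017-08-28' AND day < '2017-09-04'
    GROUP BY product;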

Data Warehouse
Summary Tables



Miscellany


    ⚈  MySQL can run 1000 qps. (just a RoT; YMMV) ↣ 

    ⚈  The SlowLog is the main clue into performance problems. Keep it on. Use long_query_time=2. 

    ⚈  Leave log_queries_not_using_indexes off -- it clutters the slowlog 

    ⚈  1000+ Databases or tables is a clue of poor schema design 

    ⚈  10,000+ Databases or tables will run very slowly because of OS overhead 

    ⚈  < 10% improvement -- don't bother. Exception: shrink the datatypes before deploying 

    ⚈  Do not normalize "continuous" values -- dates, floats, etc -- especially if you will do range queries. 

    ⚈  Beware of SQL injection. ↣ 

    ⚈  If you can't finish an InnoDB transaction in 5 seconds, redesign it. 

    ⚈  MySQL has many builtin 'hard' limits; you will not hit any of them. ↣ 

    ⚈  An excessive MaxClients (Apache) can cause trouble with max_connections 

    ⚈  Connection pooling is generally not worth the effort. (Not to be confused with 5.7's Thread Pooling.) 

    ⚈  Use Row Based Replication (RBR) 

    ⚈  A Slave can have only one Master. (Exceptions: 5.7, MariaDB, ~Galera) 

    ⚈  If you need to do multiple ALTERs, study the documentation for your version. Some cases run faster by doing all in a single ALTER; some run faster separately with INPLACE. ↣ 

    ⚈  "Load average" often raises false alarms. 

    ⚈  Pick carefully between REPLACE (== DELETE + INSERT), INSERT IGNORE, and INSERT ... ON DUPLICATE KEY UPDATE (aka "IODKU"). 

    ⚈  When Threads_running > 10, you may be in serious trouble. 

    ⚈  SHOW PROCESSLIST with some threads "Locked" -- some other thread is hogging something. 

    ⚈  SHOW PROCESSLIST may fail to show the locking thread -- it is Sleeping, but not yet COMMITted; possibly misuse of autocommit 

    ⚈  >90% CPU --> investigate queries/indexes. (The SlowLog also catches such.) 

    ⚈  >90% of one core -- since MySQL won't use multiple cores in a single connection, this indicates an inefficient query. (Eg, 12% overall on an 8-core box is probably consuming one core.) 

    ⚈  >90% I/O -- tuning, overall schema design, missing index, etc. 

    ⚈  "NoSQL" is a catchy phrase looking for a definition. By the time NoSQL gets a definition, it will look a lot like an RDBMS solution. 
MySQL can run thousands of trivial queries on modern hardware. Some special benchmarks have driven InnoDB past 100K qps. At the other extreme, I have seen a query run for a month. 1000 qps is simply a RoT that applies to a lot of systems; but your mileage can really vary a lot. 

Normalizing dates runs afoul of the 20% RoT, plus making it impossible to do range scans. Over-normalization can lead to inefficiencies. Why have a 4-byte INT as an id for the 200 countries in the world? Simply use a 2-byte country_code CHAR(2) CHARSET ascii. 

SQL Injection is where you take user input (say, from an HTML form) and insert it verbatim into a SQL statement. Some hacker will soon find that your site is not protecting itself and have his way with your data. 
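
One server-side illustration (hypothetical table; most applications would instead use their client library's placeholders): the PREPARE/EXECUTE form treats the value as plain data, never as SQL text:

PREPARE s FROM 'SELECT id FROM users WHERE name = ?';
SET @name = 'anything; DROP TABLE users; --';   -- harmless: it is just a string value
EXECUTE s USING @name;
DEALLOCATE PREPARE s;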

ALTER, in some situations, completely copies over the table and rebuilds all the indexes. For a huge table, this can take days. Doing two ALTERs means twice the work; instead, put the several operations into a single ALTER statement. OPTIMIZE is similarly costly, and may not provide much benefit. 5.7 'INPLACE'

MySQL 5.7's JSON and MariaDB 5.3's "Dynamic Columns" eat into a big excuse for "NoSQL". 

MySQL's 'hard' limits



Postlog

Refreshed -- Oct, 2013;   Galera added -- Nov, 2013;   Refreshed -- March, 2017;   Refreshed -- Sep, 2017


-- Rick James

Reposted from: https://www.cnblogs.com/DataArt/p/10092631.html
