MySQL Data Fragmentation - What, When and How
March 11, 2011

MySQL tables, including those using MyISAM and InnoDB, two of the most common storage engines, become fragmented as data is inserted and deleted randomly. Fragmentation can leave large holes in your table: blocks which must still be read when scanning it. Optimizing your table can therefore make full table scans and range scans more efficient.

Fragmentation - an example

MySQL offers quite a few storage engines for its tables. Whenever MySQL deletes rows from a table, the space they occupied is left behind as a hole. Over time, with a lot of DELETEs, this empty space can grow larger than the space actually used by your data. When MySQL scans the table, it scans up to the high water mark, that is, the highest point at which data has ever been written. New inserts will reuse some of that space, but gaps will nevertheless persist.

This extra fragmented space can make reads against the table less efficient than they might otherwise be. Let's look at an example.

We'll create a database (sometimes called a schema) and a test table. The original column definitions did not survive, so a single-column MyISAM table is assumed here for illustration:

<code>
(root@localhost) [test]> create database frag_test;
Query OK, 1 row affected (0.03 sec)

(root@localhost) [test]> use frag_test;
Database changed

(root@localhost) [frag_test]> create table frag_test (c1 varchar(64)) engine=myisam;
Query OK, 0 rows affected (0.05 sec)
</code>

Next let's add some rows to the table (any values will do; three simple rows are used here):

<code>
(root@localhost) [frag_test]> insert into frag_test values ('first row');
Query OK, 1 row affected (0.01 sec)

(root@localhost) [frag_test]> insert into frag_test values ('second row');
Query OK, 1 row affected (0.00 sec)

(root@localhost) [frag_test]> insert into frag_test values ('third row');
Query OK, 1 row affected (0.00 sec)
</code>

Now we'll check for fragmentation. SHOW TABLE STATUS prints many columns; the output below is trimmed to the relevant fields:

<code>
(root@localhost) [frag_test]> show table status like 'frag_test' \G
*************************** 1. row ***************************
     Name: frag_test
   Engine: MyISAM
  Version: 10
     Rows: 3
Data_free: 0
Collation: latin1_swedish_ci
  Comment:
1 row in set (0.00 sec)
</code>
Now let's delete a row and check again:

<code>
(root@localhost) [frag_test]> delete from frag_test limit 1;
Query OK, 1 row affected (0.00 sec)

(root@localhost) [frag_test]> show table status like 'frag_test' \G
*************************** 1. row ***************************
     Name: frag_test
   Engine: MyISAM
  Version: 10
     Rows: 2
Data_free: 20
Collation: latin1_swedish_ci
  Comment:
1 row in set (0.00 sec)
</code>

Notice the "Data_free" value has jumped from 0 to 20: those 20 bytes are the hole left behind by the row we just deleted.

Eliminating fragmentation

Luckily MySQL comes with a simple way to fix this once you've identified it: OPTIMIZE TABLE. Take a look:

<code>
(root@localhost) [frag_test]> optimize table frag_test;
+---------------------+----------+----------+----------+
| Table               | Op       | Msg_type | Msg_text |
+---------------------+----------+----------+----------+
| frag_test.frag_test | optimize | status   | OK       |
+---------------------+----------+----------+----------+
1 row in set (0.00 sec)

(root@localhost) [frag_test]> show table status like 'frag_test' \G
*************************** 1. row ***************************
     Name: frag_test
   Engine: MyISAM
  Version: 10
     Rows: 2
Data_free: 0
Collation: latin1_swedish_ci
  Comment:
1 row in set (0.00 sec)
</code>

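As a side note, not shown in the original walkthrough: the same operation can be run from the shell with the mysqlcheck client, which is handy in cron jobs. Its -o flag means optimize:

<code>
# Optimize one table, a whole schema, or every database:
$ mysqlcheck -o frag_test frag_test
$ mysqlcheck -o frag_test
$ mysqlcheck -o --all-databases
</code>
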
Performance considerations

OPTIMIZE TABLE locks the entire table while doing its work. For small tables this works fine, because the whole table can be read and rewritten quickly. For very large tables it can take a very long time, interrupting your application or reducing its availability. What to do?

Luckily MySQL has a great feature called master-master replication. In this configuration two servers each replicate from the other, but your application only ever writes to one of them, the active side, while the other sits passive. Run OPTIMIZE TABLE on the passive server, say db2: since no application traffic hits it, the table lock interrupts nothing. Once it finishes and replication has caught up, switch roles so the application points at db2.

Now that the roles are switched and the application is happily pointing at db2, perform the same OPTIMIZE TABLE operation on db1. It's now the passive database, so it won't interrupt the application either.

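The rolling procedure above might be sketched from the shell as follows. The hostnames db1 and db2 and the switchover mechanics are assumptions for illustration, not commands from the original article:

<code>
# 1. Optimize on the passive side (db2); the active side (db1) is untouched.
mysql -h db2 frag_test -e "optimize table frag_test;"

# 2. Wait for db2 to catch up on replication before switching roles.
mysql -h db2 -e "show slave status \G" | grep Seconds_Behind_Master

# 3. Repoint the application at db2 (virtual IP, DNS, or config change),
#    then run the same optimize on db1, which is now the passive side.
mysql -h db1 frag_test -e "optimize table frag_test;"
</code>
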
Other commands

Show all the fragmented tables in your database by querying information_schema, which exposes the same Data_free counter we saw above (parts of the result listing below, including some table names, were lost and are left blank):

<code>
(root@localhost) [(none)]> select table_schema, table_name, data_free, engine
    -> from information_schema.tables
    -> where data_free > 0
    -> and table_schema not in ('information_schema', 'mysql');
+--------------+-----------------------------+-----------+--------+
| table_schema | table_name                  | data_free | engine |
+--------------+-----------------------------+-----------+--------+
| aitc         | wp_comments                 |           |        |
| aitc         | wp_options                  |           |        |
| aitc         | wp_postmeta                 |           |        |
| cactidb      |                             |           |        |
| cactidb      |                             |           |        |
| drupal       |                             |           |        |
| drupal       |                             |           |        |
| drupal       |                             |           |        |
| gg           | wp_comments                 |           |        |
| gg           | wp_options                  |           |        |
| gg           | wp_postmeta                 |           |        |
| ihi          | wp_comments                 |           |        |
| ihi          | wp_options                  |           |        |
| ihi          | wp_postmeta                 |           |        |
| ihi          | wp_redirection_items        |           |        |
| ihi          | wp_redirection_logs         |           |        |
| nds          | wp_comments                 |           |        |
| nds          | wp_options                  |           |        |
| nds          | wp_postmeta                 |           |        |
| oos          | wp_comments                 |           |        |
| oos          | wp_options                  |           |        |
| oos          | wp_postmeta                 |           |        |
| phplist      |                             |           |        |
| phplist      |                             |           |        |
| phplist      |                             |           |        |
| phplist      |                             |           |        |
| phplist      |                             |           |        |
| phplist      |                             |           |        |
| pn_nds       |                             |           |        |
| psa          | exp_event                   |           |        |
| test         | active_sessions             |           |        |
+--------------+-----------------------------+-----------+--------+
31 rows in set (0.26 sec)
</code>

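A step further, a common companion trick rather than something from the original article: have information_schema generate the OPTIMIZE statements for you, then feed them back into mysql. The -N flag suppresses the column-name header so only the statements are emitted:

<code>
mysql -N -e "select concat('optimize table \`', table_schema, '\`.\`', table_name, '\`;')
  from information_schema.tables
  where data_free > 0
  and table_schema not in ('information_schema', 'mysql');" | mysql
</code>
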
You can also defragment a table by changing its storage engine. That's because MySQL has to read the entire table and write it back to disk under the new engine, and in the process the rows and data get compacted more efficiently.

Here's what that looks like:

<code>
(root@localhost) [frag_test]> alter table frag_test engine=innodb;
Query OK, 2 rows affected (0.17 sec)
Records: 2  Duplicates: 0  Warnings: 0

(root@localhost) [frag_test]> show table status like 'frag_test' \G
*************************** 1. row ***************************
     Name: frag_test
   Engine: InnoDB
  Version: 10
     Rows: 2
Data_free: 0
Collation: latin1_swedish_ci
  Comment: InnoDB free: 7168 kB
1 row in set (0.00 sec)
</code>

Conclusion

Fragmentation is something to keep an eye on, but not necessarily to obsess over. If you see a few tables with a very large Data_free value, it can be worthwhile to OPTIMIZE them, as read performance on those tables will improve afterward.


----
Source: www.databasejournal.com/
howtos/mysql_optimize.txt · Last modified: 02/12/2018 21:34 by 127.0.0.1