CP10. How to Do Backups and Recovery in MySQL

Physical and Logical Backups

Logical backups operate on the internal structures: databases (schemas), tables, views, users, and other objects.

Physical backups are concerned with the OS-side representation of the database structures: data files, transaction journals, and so on.

Logical backups: Logical backups are concerned with the actual data, not its physical representation.

Here are some examples of logical backups:

  • Table data queried and saved into an external .csv file using the SELECT ... INTO OUTFILE statement (see the example after this list).
  • A table or any other object's definition saved as a SQL statement.
  • One or more INSERT SQL statements that, run against a database and an empty table, would populate that table up to a preserved state.
  • A recording of all statements ever run that touched a particular table or database and modified data or schema objects. By this we mean DML and DDL commands.
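
As an example of the first item in the list above, table data can be exported to a CSV file with SELECT ... INTO OUTFILE. This is only a sketch: the output path assumes the server's secure_file_priv option permits writing to /var/lib/mysql-files.

mysql> SELECT * FROM sakila.actor
    -> INTO OUTFILE '/var/lib/mysql-files/actor.csv'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';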

Recovery of a logical backup is usually done by executing one or more SQL statements. Continuing with our earlier examples, let's review the options for recovery:

  • Data from a .csv file can be loaded into a table using the LOAD DATA INFILE command (see the example after this list).
  • The table can be created or re-created by running a DDL SQL statement.
  • INSERT SQL statements can be executed using the mysql CLI or any other client.
  • A replay of all the statements run in a database will restore it to its state after the last statement.
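
To continue the sketch, the CSV file produced above can be loaded back with LOAD DATA INFILE. Here actor_restore is a hypothetical empty table created beforehand (for example, with CREATE TABLE actor_restore LIKE actor); the file path carries over from the previous example.

mysql> LOAD DATA INFILE '/var/lib/mysql-files/actor.csv'
    -> INTO TABLE sakila.actor_restore
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n';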

Physical backups: Physical backups are all about the data as it exists in operating system files and the internal workings of the RDBMS.

Some examples of physical backups include the following:

  • A cold database directory copy, meaning one done when the database is shut down (as opposed to a hot copy, done while the database is running).
  • A storage snapshot of volumes and filesystems used by the database.
  • A copy of table data files.
  • A stream of changes to database data files in some form. Most RDBMSs use a stream like this for crash recovery, and sometimes for replication; InnoDB's redo log is a similar concept.

Here are the recovery options for the previous examples:

  • A cold copy can be moved to a desired location or server and then used as a data directory by a MySQL instance, old or new.
  • A snapshot can be restored in place or on another volume and then used by MySQL.
  • Table files can be put in place of existing ones.
  • A replay of the changes stream against the data files will recover their state to the last point in time.

Overview of Logical and Physical Backups

Properties of logical backups:

  • Contain a description and the contents of the logical structures
  • Are human-readable and editable
  • Are relatively slow to take and restore

Logical backup tools are:

  • Very flexible, allowing you to rename objects, combine separate sources, perform partial restores, and more
  • Not usually bound to a specific database version or platform
  • Able to extract data from corrupted tables and safeguard from corruption
  • Suitable for backing up remote databases (for example, in the cloud)

Properties of physical backups:

  • Are byte-by-byte copies of parts of data files, or entire filesystems/volumes
  • Are fast to take and restore
  • Offer little flexibility and will always result in the same structure upon restore
  • Can include corrupted pages

Physical backup tools are:

  • Cumbersome to operate
  • Usually don't allow for easy cross-platform or even cross-version portability
  • Unable to back up remote databases without OS access

Replication as a Backup Tool

Replication, by its nature, produces and maintains a full or partial copy of the source database, which is why it can serve as a backup tool.

The mysqldump Program

A logical backup of a table is essentially the table's definition plus statements that can repopulate its data. Both can be produced by hand, which is what mysqldump automates:

mysql> show create table sakila.actor\G
*************************** 1. row ***************************
       Table: actor
Create Table: CREATE TABLE `actor` (
  `actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=602 DEFAULT CHARSET=utf8mb4
1 row in set (0.00 sec)

mysql> select concat("INSERT INTO actor VALUES('",
    -> actor_id,"','",first_name,"','",
    -> last_name,"','",last_update,"');")
    -> as insert_statement from sakila.actor limit 1\G
*************************** 1. row ***************************
insert_statement: INSERT INTO actor VALUES('1','PENELOPE','GUINESS','2006-02-15 04:34:33');
1 row in set (0.00 sec)

mysql> exit;
Bye

In the following example, mysqldump is invoked with just the credentials and a database name, with its output redirected into a file; without the redirection, the tool would print all the statements to standard output.

[root@DBAMAXWELL /]# mysqldump -u root -p sakila > sakila_dump.sql

This can be useful for creating a logical clone of the database. It is also possible to generate a dump that has no data, only object definitions. Flexibility like this is one of the key features of logical backups and mysqldump:

[root@DBAMAXWELL /]# mysqldump -u root -p --no-data sakila > sakila_dump_no_data20220314.sql

It's also possible to create a dump of a single table in a database. In the next example, sakila is the database and category is the target table:

[root@DBAMAXWELL /]# mysqldump -u root -p sakila category > sakila_target_table_category20220314.sql

Turning the flexibility up a notch, you can dump just a few rows from a table by specifying the --where or -w argument. As the name suggests, the syntax is the same as for the WHERE clause in a SQL statement:

[root@DBAMAXWELL /]# mysqldump -u root -p sakila actor --where="actor_id > 195" > sakila_target_table_actor_actor_id_more_than195_20220314.sql  

Sometimes it's necessary to output every database, every object, and even every user. mysqldump is capable of that. The following command will effectively create a full and complete logical backup of a database instance:

[root@DBAMAXWELL /]# mysqldump -u root -p --all-databases --triggers --routines --events > dump_alldb_trig_routi_events.sql

Triggers are dumped by default, so this option won't appear in the commands that follow. In the event you don't want to dump triggers, you can use --no-triggers.
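
For example, a dump of sakila without trigger definitions (the output filename here is only illustrative) could be taken like this:

[root@DBAMAXWELL /]# mysqldump -u root -p --no-triggers sakila > sakila_dump_no_triggers20220314.sql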

There are a couple of problems with this command, however. First, even though we have redirected the output of the command to a file, the resulting file can be huge. Fortunately, its contents are likely to be well suited for compression, though this depends on the actual data. Regardless, it's a good idea to compress the output:

[root@DBAMAXWELL /]# mysqldump -u root -p --all-databases --routines --events | gzip > dump_alldb_routi_event.sql.gz

The basic command to make a dump of a system using mainly InnoDB tables, which guarantees limited impact on concurrent writes, is as follows:

[root@DBAMAXWELL /]# mysqldump -u root -p --single-transaction --all-databases --routines --events | gzip > dump_single_trans_alldb_routi_event.sql.gz

In general, using mysqldump and the logical backups it produces allows for the following:

  • Easy transfer of the data between environments
  • Editing of the data in place, both by humans and programs
  • Finding certain data file corruptions
  • Transfer of the data between major database versions, different platforms, and even databases

Loading Data from a SQL Dump File

Let's take a look at a simple example with a single database, dumping sakila and loading it into a new, empty database, sakila_mod:

[root@DBAMAXWELL /]# mysqldump -u root -p sakila > /tmp/dump_sakila_20220314.sql
Enter password: 
[root@DBAMAXWELL /]# mysql -u root -p -e "CREATE DATABASE sakila_mod"
Enter password: 
[root@DBAMAXWELL /]# mysql -u root -p sakila_mod < /tmp/dump_sakila_20220314.sql
Enter password: 
[root@DBAMAXWELL /]# mysql -u root -p sakila_mod -e "SHOW TABLES"
Enter password: 
+----------------------------+
| Tables_in_sakila_mod       |
+----------------------------+
| actor                      |
| ...                        |
+----------------------------+

mysqlpump

mysqlpump is a utility program bundled with MySQL version 5.7 and later that improves on mysqldump in several areas, mainly around performance and usability. The key differentiators are as follows:

  • Parallel dump capability
  • Built-in dump compression
  • Improved restore performance through delayed creation of secondary indexes
  • Easier control over what objects are dumped
  • Modified behavior of dumping user accounts

mysqlpump will default to dumping all of the databases (excluding INFORMATION_SCHEMA, performance_schema, ndbinfo, and the sys schema):

[root@DBAMAXWELL /]# mysqlpump -u root -p > pump_20220314.out
Enter password: 
Dump progress: 1/1 tables, 0/0 rows
Dump progress: 6/68 tables, 743877/4014303 rows
Dump progress: 7/73 tables, 1785685/4019828 rows
Dump progress: 52/73 tables, 2734903/4019828 rows
Dump progress: 72/73 tables, 3769057/4019828 rows
Dump completed in 4430 milliseconds
[root@DBAMAXWELL /]#

Let's combine all the new features to produce a compressed dump of non-system databases and user definitions, and use concurrency in the process:

[root@DBAMAXWELL /]# mysqlpump -u root -p --compress-output=zlib --include-users=bob,kate --include-databases=sakila,nasa,employees --parallel-schemas=2:employees --parallel-schemas=sakila,nasa > pump_2_20220314.out 
Enter password: 
Dump progress: 1/2 tables, 0/331579 rows
Dump progress: 23/27 tables, 200381/3961373 rows
Dump progress: 23/27 tables, 394381/3961373 rows
Dump progress: 23/27 tables, 594131/3961373 rows
Dump progress: 25/27 tables, 825508/3961373 rows
Dump progress: 25/27 tables, 1084258/3961373 rows
Dump progress: 25/27 tables, 1335508/3961373 rows
Dump progress: 26/27 tables, 1598566/3961373 rows
Dump progress: 26/27 tables, 1871566/3961373 rows
Dump progress: 26/27 tables, 2134066/3961373 rows
Dump progress: 26/27 tables, 2404566/3961373 rows
Dump progress: 26/27 tables, 2634566/3961373 rows
Dump progress: 26/27 tables, 2897816/3961373 rows
Dump progress: 26/27 tables, 3136816/3961373 rows
Dump progress: 26/27 tables, 3402066/3961373 rows
Dump progress: 26/27 tables, 3665066/3961373 rows
Dump progress: 26/27 tables, 3931566/3961373 rows
Dump completed in 17369 milliseconds
[root@DBAMAXWELL /]# 

mysqlpump is an improvement over mysqldump and adds important concurrency, compression, and object control features.

mydumper and myloader

mydumper and myloader are both part of the open source project mydumper. This set of tools attempts to make logical backups more performant, easier to manage, and more human-friendly.

If you want to install mydumper and myloader, please refer to the link below:

https://blog.csdn.net/u011868279/article/details/123490724

To create the dump and explore it, execute the following commands:

[root@DBAMAXWELL ~]# mydumper --version          
mydumper 0.10.3, built against MySQL 5.7.33-36
[root@DBAMAXWELL ~]# myloader --version
myloader 0.10.3, built against MySQL 5.7.33-36
[root@DBAMAXWELL ~]# mydumper -u root -a
Enter MySQL Password: 
[root@DBAMAXWELL ~]# ls -ltr
total 2708
drwxr-xr-x  2   500   500      23 Feb  1 07:06 world-db
drwxr-xr-x  2   500   500      72 Feb  1 07:06 sakila-db
-rw-r--r--  1 root  root  2609803 Feb  3 17:45 get-pip.py
-rw-r--r--  1 root  root     4942 Feb 10 21:53 wget-log
drwxr-xr-x 18 admin admin    4096 Feb 11 14:01 Python-3.9.1
-rw-r--r--  1 root  root        0 Feb 15 10:54 first.db
drwxr-xr-x  5 root  root     4096 Feb 22 14:00 test_db
drwxr-xr-x  2 root  root        6 Mar  6 11:37 learning_bash_shell
drwxr-xr-x 10 root  root      322 Mar 11 23:06 mysql-sys
-rw-r--r--  1 root  root   116480 Mar 12 11:44 NASA_Facilities.csv
drwx------  2 root  root    20480 Mar 14 22:57 export-20220314-225738
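
Restoring such a dump, which the run above doesn't show, is done with myloader pointed at the export directory. The following is only a sketch: sakila_restored is an assumed target database, -s and -B select the source database and the database to restore into, -o overwrites tables if they already exist, and the -a (ask password) option is assumed to be available in this myloader build just as it is in the mydumper binary above.

[root@DBAMAXWELL ~]# mysql -u root -p -e "CREATE DATABASE sakila_restored"
[root@DBAMAXWELL ~]# myloader -u root -a -d export-20220314-225738 -s sakila -B sakila_restored -o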

mydumper has some more advanced features:

  • Lightweight backup locks support. mydumper utilizes these locks by default when possible. These locks do not block concurrent reads and writes to InnoDB tables, but will block any DDL statements, which could otherwise render the backup invalid.
  • Use of savepoints. mydumper uses a trick with transaction savepoints to minimize metadata locking.
  • Limits on the duration of metadata locking. mydumper allows two options: failing quickly, or killing long-running queries that prevent mydumper from succeeding (see the sketch after this list).
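
As a sketch of that last point, the following invocation uses mydumper's --long-query-guard and --kill-long-queries options (the 90-second threshold is only an illustrative value) to kill any query running longer than 90 seconds instead of aborting the backup:

[root@DBAMAXWELL ~]# mydumper -u root -a --long-query-guard 90 --kill-long-queries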

mydumper and myloader are advanced tools taking logical backup capabilities to the maximum.

Cold Backup and Filesystem Snapshots

A cold backup is really just a copy of the data directory and other necessary files, done while the database instance is down.

The cold backup still has its good points:

  • Very fast (arguably the fastest backup method apart from snapshots)
  • Straightforward
  • Easy to use, hard to do wrong
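
A minimal sketch of a cold backup, assuming the service is named mysqld, the data directory is the default /var/lib/mysql, and /backup is an existing destination (all three are assumptions, not values taken from the earlier examples):

[root@DBAMAXWELL /]# systemctl stop mysqld
[root@DBAMAXWELL /]# cp -a /var/lib/mysql /backup/mysql-cold-20220314
[root@DBAMAXWELL /]# systemctl start mysqld

Recovery is the reverse: stop the instance, put the copied directory back in place as the data directory (preserving ownership and, where applicable, the SELinux context), and start MySQL again.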

Snapshots are a different matter: it is somewhat difficult to get a snapshot backup right. There are two options:

  • Cold backup snapshot (the database is shut down)
  • Hot backup snapshot (the database is running; taking the snapshot correctly is a greater challenge than when the database is down; see the sketch below)
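
As a rough sketch of the hot variant using LVM snapshots (the volume group vg_data, logical volume lv_mysql, snapshot size, and backup destination are all assumptions; the session holding the read lock must stay open until the snapshot is created, and a production procedure would also record binary log coordinates):

mysql> FLUSH TABLES WITH READ LOCK;
[root@DBAMAXWELL /]# lvcreate --snapshot --size 10G --name mysql_snap /dev/vg_data/lv_mysql
mysql> UNLOCK TABLES;
[root@DBAMAXWELL /]# mkdir -p /mnt/mysql_snap && mount /dev/vg_data/mysql_snap /mnt/mysql_snap
[root@DBAMAXWELL /]# cp -a /mnt/mysql_snap /backup/mysql-snap-20220314
[root@DBAMAXWELL /]# umount /mnt/mysql_snap && lvremove -y /dev/vg_data/mysql_snap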
