Logging the statements executed by a MySQL database

Translated on 2015-11-18 23:43:48

mysql> set global log_output='table';
Query OK, 0 rows affected (0.00 sec)
Note that this will tell MySQL to push all traced SQL into the appropriate tables, which are based on the CSV (comma-separated values) engine. One nice aspect of the CSV tables is that you can take the underlying files and import them directly into Microsoft Excel or OpenOffice Calc to prepare reports and the like.
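If you'd like to confirm where the server is currently sending its log output, and where those CSV files actually live, you can check a couple of variables first (the datadir path will of course differ between installations):

mysql> show global variables like 'log_output';
mysql> show global variables like 'datadir';

The files backing the log tables (general_log.CSV and slow_log.CSV) are kept in the mysql directory under that datadir, and those are the files you'd copy out to open in a spreadsheet.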

Next, to enable tracing simply issue another command from a MySQL client prompt. For example, to trace all SQL sent to a MySQL Server, you can do the following:

mysql> set global general_log=1;
Query OK, 0 rows affected (0.00 sec)
You can then trace to your heart’s content and once you’re satisfied with what you’ve got, just disable tracing again:

mysql> select count(*) from gim2.broker;
+----------+
| count(*) |
+----------+
|       23 |
+----------+
1 row in set (0.00 sec)

mysql> set global general_log=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from mysql.general_log\G
***************** 1. row *****************
event_time: 2009-01-21 09:53:03
user_host: root[root] @ [154.231.0.3]
thread_id: 27
server_id: 0
command_type: Query
argument: select count(*) from gim2.broker
***************** 2. row *****************
event_time: 2009-01-21 09:53:08
user_host: root[root] @ [154.231.0.3]
thread_id: 27
server_id: 0
command_type: Query
argument: set global general_log=0
2 rows in set (0.00 sec)
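Once you've reviewed a trace, it's often worth clearing the table before the next run so stale entries don't clutter the results. Assuming your account has sufficient privileges, a simple way to do that is:

mysql> truncate table mysql.general_log;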
Admittedly, the MySQL general query log doesn’t contain a wealth of diagnostic information to use:

mysql> desc mysql.general_log;
+--------------+-------------+------+-----+-------------------+
| Field        | Type        | Null | Key | Default           |
+--------------+-------------+------+-----+-------------------+
| event_time   | timestamp   | NO   |     | CURRENT_TIMESTAMP |
| user_host    | mediumtext  | NO   |     | NULL              |
| thread_id    | int(11)     | NO   |     | NULL              |
| server_id    | int(11)     | NO   |     | NULL              |
| command_type | varchar(64) | NO   |     | NULL              |
| argument     | mediumtext  | NO   |     | NULL              |
+--------------+-------------+------+-----+-------------------+
This isn't to say there's no value there: you can certainly ascertain query workload patterns and their originating source, along with other diagnostics, by parsing the argument column (e.g. how often a particular table is accessed). But the slow query log/table contains better stats for troubleshooting SQL:
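For instance, a rough sketch of checking how often a particular table shows up in the captured workload (here reusing the broker table from the earlier example) is to pattern-match against the argument column:

mysql> select count(*) from mysql.general_log
    -> where command_type = 'Query'
    -> and argument like '%broker%';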

mysql> desc mysql.slow_log;
+----------------+--------------+------+-----+-------------------+
| Field          | Type         | Null | Key | Default           |
+----------------+--------------+------+-----+-------------------+
| start_time     | timestamp    | NO   |     | CURRENT_TIMESTAMP |
| user_host      | mediumtext   | NO   |     | NULL              |
| query_time     | time         | NO   |     | NULL              |
| lock_time      | time         | NO   |     | NULL              |
| rows_sent      | int(11)      | NO   |     | NULL              |
| rows_examined  | int(11)      | NO   |     | NULL              |
| db             | varchar(512) | NO   |     | NULL              |
| last_insert_id | int(11)      | NO   |     | NULL              |
| insert_id      | int(11)      | NO   |     | NULL              |
| server_id      | int(11)      | NO   |     | NULL              |
| sql_text       | mediumtext   | NO   |     | NULL              |
+----------------+--------------+------+-----+-------------------+
Tracing ‘slow’ running SQL is similar to collecting all SQL but there are one or two extra steps depending on the criteria you want to use. First, you need to tell MySQL how ‘slow’ a query needs to be in order to be collected. This is currently indicated in seconds if you want to trace to a table, but it can also be expressed in microseconds if you want to send the output to a file. We’ll just trace to a table for now and set the criteria to one second:

mysql> set global long_query_time=1;
Query OK, 0 rows affected (0.00 sec)
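Keep in mind that SET GLOBAL only affects connections opened after the change; an existing session keeps the long_query_time value it started with, so reconnect (or also set the session variable) if you intend to run the slow queries from the same client. You can confirm the global setting with:

mysql> show global variables like 'long_query_time';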
Once you do this, you can then start tracing slow running SQL by issuing this command:

mysql> set global slow_query_log=1;
Query OK, 0 rows affected (0.00 sec)
Now you’ll be capturing all SQL that takes longer than one second to execute – such as this one:

mysql> select b.client_id,
-> b.client_first_name,
-> b.client_last_name,
-> (select sum(number_of_units * price)
-> from client_transaction c
-> where c.action = 'buy' and
-> c.client_id = b.client_id) -
-> (select sum(number_of_units * price)
-> from client_transaction d
-> where d.action = 'sell' and
-> d.client_id = b.client_id) portfolio_value
-> from client_transaction a,
-> client b
-> where a.client_id = b.client_id
-> group by b.client_id,
-> b.client_first_name,
-> b.client_last_name
-> having portfolio_value > 1000000;
+-----------+-------------------+------------------+-----------------+
| client_id | client_first_name | client_last_name | portfolio_value |
+-----------+-------------------+------------------+-----------------+
|         5 | ABNER             | ROSSELLETT       |      1252115.50 |
|       500 | CANDICE           | BARTLETT         |      1384877.50 |
+-----------+-------------------+------------------+-----------------+
2 rows in set (3.70 sec)
mysql> select * from mysql.slow_log\G
***************** 1. row *****************
start_time: 2009-01-21 10:10:50
user_host: root[root] @ [154.231.0.3]
query_time: 00:00:03
lock_time: 00:00:00
rows_sent: 2
rows_examined: 1323751
db: gim2
last_insert_id: 0
insert_id: 0
server_id: 0
sql_text: select b.client_id,
b.client_first_name,
b.client_last_name,
(select sum(number_of_units * price)
from client_transaction c
where c.action = 'buy' and
c.client_id = b.client_id) -
(select sum(number_of_units * price)
from client_transaction d
where d.action = 'sell' and
d.client_id = b.client_id) portfolio_value
from client_transaction a,
client b
where a.client_id = b.client_id
group by b.client_id,
b.client_first_name,
b.client_last_name
having portfolio_value > 1000000
1 row in set (0.02 sec)
Notice again that you’ve got better information to work with in the slow_log table in terms of understanding what’s going on underneath the covers – you can see how many rows were examined, the database being accessed, elapsed time, and more.
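Because the slow log is just a table at this point, you can also report on it directly. As a small sketch using the columns shown in the desc output above, this groups the captured statements by database and ranks them by total rows examined:

mysql> select db, count(*) as queries, sum(rows_examined) as total_rows_examined
    -> from mysql.slow_log
    -> group by db
    -> order by total_rows_examined desc;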

Source: http://dev.mysql.com/tech-resources/articles/mysql_51_diagnostic_tools.html

Method 1: log slow-query SQL:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2 (SQL statements that take longer than 2 seconds are recorded; setting this to a smaller value while debugging is also a useful approach.)
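For reference, these options go in the [mysqld] section of my.cnf. Note that log_slow_queries is the older option name; on MySQL 5.1.29 and later the equivalent settings are slow_query_log and slow_query_log_file. A minimal sketch (the file path is only an example):

[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time     = 2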

Method 2: use the binlog that MySQL replication writes:
log_bin = /var/log/mysql/mysql-bin.log (this file has to be decoded with mysqlbinlog to be readable, e.g.
mysqlbinlog mysql-bin.000042 | grep "T_ABC" | grep "column value"
)
MySQL records every INSERT/UPDATE/DELETE statement here (although the statements may not look exactly the way you wrote them); this is the log file written for the SLAVE.
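If the server writes the binlog in ROW format rather than STATEMENT format, the grep approach above won't find readable SQL. In that case mysqlbinlog can reconstruct pseudo-SQL for you (the file name below is just illustrative):

mysqlbinlog --base64-output=decode-rows -vv mysql-bin.000042 | less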

Method 3 (recommended): record every statement MySQL executes:
log = /tmp/mysql.log
After restarting MySQL, tail -f /tmp/mysql.log to watch the statements arrive.
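Note that the bare log option was deprecated in MySQL 5.1 and removed in later releases; on newer servers the equivalent my.cnf settings are general_log and general_log_file. A minimal sketch, again with an example path:

[mysqld]
general_log      = 1
general_log_file = /tmp/mysql.log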
Source: http://blog.csdn.net/kenera/article/details/5624981
See also: http://blog.csdn.net/mchdba/article/details/38236833
