SQL SERVER – How to Stop Growing Log File Too Big

http://blog.sqlauthority.com/

I was recently engaged in a performance tuning engagement in Singapore. The organization had a huge database with more than a million transactions every hour. During the assignment, I noticed that they were truncating the transaction log. This alarmed me, so I advised them to stop the practice immediately, as there is really no need to truncate or shrink the transaction log. They were truncating the log because it was growing too big and they wanted to keep its size under control. I offered them two different solutions. Let us look at each of them.

If you are skimming this post before leaving a comment, please read both of the following options first:

1) Convert the Recovery Model to Simple Recovery

Truncating the transaction log breaks the T-Log LSN (Log Sequence Number) chain. As a consequence, if disaster strikes, you will not be able to restore your T-Logs, and point-in-time recovery will not be possible. If you are fine with this trade-off and do not need point-in-time recovery, I suggest that you change your recovery model to the Simple Recovery Model. This way, your log file will not grow out of control.
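A minimal sketch of this change follows (the database name SimpleTran matches the demo script later in this post; the logical log file name 'SimpleTran_log' is an assumption — check sys.master_files for the actual name; the shrink is shown only as an optional one-time step to reclaim space already allocated):

```sql
-- Switch the database to the SIMPLE recovery model;
-- the log is then truncated automatically at each checkpoint.
ALTER DATABASE [SimpleTran] SET RECOVERY SIMPLE;
GO
-- Optional one-time step: shrink the log file back to a small size.
-- Avoid shrinking on a schedule; repeated grow/shrink cycles fragment the file.
USE [SimpleTran];
GO
DBCC SHRINKFILE (N'SimpleTran_log', 128); -- target size in MB
GO
```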

2) Start Taking Transaction Log Backup

If your business cannot tolerate data loss or requires point-in-time recovery, you cannot afford anything less than the Full Recovery Model. In the Full Recovery Model, the transaction log keeps growing until you back it up, so you need to take T-Log backups at regular intervals. This keeps the log from growing beyond a predictable limit. If you take an hourly T-Log backup, the log grows for up to one hour; the backup then truncates the committed portion of the log. Truncation does not shrink the physical size of the T-Log file much; instead, the truncated space is marked as free so the next hour's transactions can reuse it.

With this method, you can restore your database to a point in time if a disaster ever strikes your server.
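To illustrate why the unbroken log chain matters, here is a hedged sketch of a point-in-time restore (the file paths and the STOPAT timestamp are placeholders, not part of the original demo):

```sql
-- Restore the last full backup, leaving the database ready to accept log restores.
RESTORE DATABASE [SimpleTran]
FROM DISK = N'D:/SimpleTran.bak'
WITH NORECOVERY;
GO
-- Apply log backups in sequence, stopping just before the disaster occurred.
RESTORE LOG [SimpleTran]
FROM DISK = N'D:/SimpleTran.log'
WITH STOPAT = '2011-01-01 10:45:00', RECOVERY;
GO
```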

Let us run an example to demonstrate this. In this case, I have done the following steps:

  1. Create Sample Database in FULL RECOVERY Model
  2. Take Full Backup (a full backup is required before any subsequent log backup)
  3. Repeat Following Operation
    1. Take Log Backup
    2. Insert Some rows
    3. Check the size of Log File
  4. Clean Up

After a short while, you will notice that the log file (.ldf) stops growing, while the size of the log backup keeps increasing.
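One way to verify this behavior (not part of the original demo) is to check how much of the log file is actually in use; a log that is being reused shows a roughly constant file size with a fluctuating "Log Space Used (%)" value:

```sql
-- Report the log file size and the percentage currently in use, per database.
DBCC SQLPERF(LOGSPACE);
GO
```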

If you have an issue with your log file growth, I suggest that you follow either of the above solutions instead of truncating it.

/* FULL Recovery and Log File Growth */
USE [master]
GO
-- Create Database SimpleTran
IF EXISTS ( SELECT name FROM sys.databases WHERE name = N'SimpleTran' )
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
CREATE DATABASE [SimpleTran]
GO
-- Set Database backup model to FULL
ALTER DATABASE [SimpleTran] SET RECOVERY FULL
GO
BACKUP DATABASE [SimpleTran] TO DISK = N'D:/SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Check Database Log File Size
SELECT DB_NAME(database_id) AS DatabaseName,
    name AS Logical_Name,
    physical_name,
    (size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Create Table in Database with Transaction
USE SimpleTran
GO
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[RealTempTable]') AND type IN (N'U'))
DROP TABLE [dbo].[RealTempTable]
GO
CREATE TABLE RealTempTable (ID INT)
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
    name AS Logical_Name,
    physical_name,
    (size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Take Log Backup
BACKUP LOG [SimpleTran] TO DISK = N'D:/SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction multiple times and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
    name AS Logical_Name,
    physical_name,
    (size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
/*
Now run the following code multiple times.
You will notice that it does not increase the size of the .ldf file,
but it certainly increases the size of the log backup.
*/
-- Second Time
-- START
BACKUP LOG [SimpleTran] TO DISK = N'D:/SimpleTran.log' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
    name AS Logical_Name,
    physical_name,
    (size * 8) / 1024 AS SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- END
--Clean Up
USE [master]
GO
IF EXISTS ( SELECT name FROM sys.databases WHERE name = N'SimpleTran' )
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END

If you run the code listed between START and END, you will observe the same result almost every time: the .ldf file stays the same size while the log backup keeps growing.

This validates our earlier discussion. After seeing this demonstration, the Singapore team immediately replaced log truncation with regular log backups. Let me know what you think about this article.
