How-to: Proactive capacity management

Capacity management is one of the most important tasks you have to perform as a DBA. Usually we rely on threshold alarms for the disk volumes and the databases, but I find it much better to take action in advance – this way I am sure the SQL Servers under my supervision will have the space they need to operate without issues. And yes, there is auto-grow, but it is a safety mechanism whose role is to ensure that your database stays operational even in critical situations; it comes with a negative performance impact that I try to avoid on my key production systems.

There are a lot of articles out there that provide pretty detailed information about the size of databases, volumes, tables and partitions, but none of them focuses on future usage. Except maybe two – the first one is from Microsoft Operations, which focuses on the ‘days remaining’ metric, and the second one is from Chad Miller, which we will use as a base and develop further.

Why is ‘days remaining’ such an important metric for us? When you are dealing with tens of thousands of databases and thousands of disks, you need to focus only on the objects that require your immediate attention.

Let us go through the steps for deploying the solution:

  1. Create the database, tables, views and the stored procedures
  2. Configure the SQL Servers for which we will capture data
  3. Schedule the data collection for the volumes and the databases
  4. Alternatively use SSIS to load data from external sources
  5. Configure data purity and data purge jobs

You can download the scripts you will need from here.

  1. Create the database, tables, views and the stored procedures

    As a start we need to create a database to store our information; in my case I will name it capacity_management. Using the provided script (structure.sql) we will create three tables – vol_space, db_space and server_space_lku, two views – db_space_change_vw and vol_space_change_vw, and the stored procedures db_space_capacity_sp and vol_space_capacity_sp.
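    The exact schema ships in structure.sql; as a rough sketch, the objects could look something like this (the column names here are assumptions based on what the views and procedures reference later, not the actual script):

```sql
CREATE DATABASE capacity_management;
GO
USE capacity_management;
GO
-- List of SQL Servers to collect data from
CREATE TABLE dbo.server_space_lku (
    server_name sysname NOT NULL PRIMARY KEY
);
-- One row per volume per collection run
CREATE TABLE dbo.vol_space (
    server_name sysname       NOT NULL,
    vol_name    nvarchar(260) NOT NULL,   -- e.g. S:\BeerStore045_Data007\
    size_gb     decimal(18,2) NOT NULL,
    used_gb     decimal(18,2) NOT NULL,
    dt          datetime      NOT NULL DEFAULT (GETDATE())
);
-- One row per database per collection run
CREATE TABLE dbo.db_space (
    server_name sysname       NOT NULL,
    db_name     sysname       NOT NULL,
    size_gb     decimal(18,2) NOT NULL,
    used_gb     decimal(18,2) NOT NULL,
    dt          datetime      NOT NULL DEFAULT (GETDATE())
);
```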

  2. Configure the SQL Servers for which we will capture data

    The PowerShell scripts that we will configure later use the table server_space_lku and capture data for all SQL Servers listed in it. This assumes that you are able to connect to all of your SQL Servers.

    Let us add Europe\Kaloyan01 as a SQL Server in it:

     
    insert server_space_lku (server_name) values ('Europe\Kaloyan01')
     
    

    If you want to reuse your CMS server and query specific containers from it, you can create the capacity_management database there and use the PowerShell script Write-DbSpaceToDb_CMS.ps1 instead of Write-DbSpaceToDb.ps1; the only requirement is to alter the ‘WHERE’ clause at the end of the script.

    Example: in the provided *_CMS.ps1 scripts I am querying the servers in group ‘SQL2012’:

     
    Get-SqlData $destServer $destDb  "SELECT server_name FROM (
    SELECT DISTINCT groups.name AS 'server_group_name',svr.server_name AS 'server_name'
    FROM msdb.dbo.sysmanagement_shared_server_groups_internal groups 
    INNER JOIN msdb.dbo.sysmanagement_shared_registered_servers_internal svr
      ON groups.server_group_id = svr.server_group_id
      WHERE groups.name like 'SQL2012') a" |
    foreach {
          $dataTable = Get-DBSpace $_.server_name
          if ($dataTable)
          { Write-DataTableToDatabase $dataTable 'db_space' }   }
     
    
  3. Schedule the data collection for the volumes and the databases

    This can be done with Windows Task Scheduler or SQL Server Agent. (The two required parameters are the name of the SQL Server and the name of the database we created in the beginning.)

    In my case I have created a SQL Server Agent job that uses the PowerShell scripts Write-DbSpaceToDb_CMS.ps1 and Write-VolToDb_CMS.ps1. The two required parameters are the names of the destination server and the capacity_management database where we will store the information.
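    A minimal sketch of such a job created through msdb could look like this (the script path and the way the two parameters are passed are assumptions for illustration; adjust them to where you stored the scripts and to the parameters they actually expect):

```sql
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Capacity - collect db space';
EXEC dbo.sp_add_jobstep
    @job_name  = N'Capacity - collect db space',
    @step_name = N'Run collector',
    @subsystem = N'PowerShell',
    @command   = N'& "C:\Scripts\Write-DbSpaceToDb_CMS.ps1" "MyCmsServer" "capacity_management"';
EXEC dbo.sp_add_jobschedule
    @job_name          = N'Capacity - collect db space',
    @name              = N'Daily',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,
    @active_start_time = 010000;   -- run at 01:00
EXEC dbo.sp_add_jobserver @job_name = N'Capacity - collect db space';
GO
```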

  4. Alternatively use SSIS to load data from external sources

    If you do not have a direct connection to all of your SQL Servers, you can use SSIS to load data from, for example, csv files. What I’ve done in my case is gather data daily on each of my servers and upload it over ftp to a remote server. Then I download all the csv files to my CMS server and load them via SSIS. The SSIS package I use consists of several steps:

    • A Foreach Loop container that loads all files into a temporary table
    • A Foreach Loop container that moves the processed files into an archive folder
    • A SQL script that splits the data from the temp table into the vol_space and db_space tables
      *hint: I am doing this for Oracle and MySQL servers.
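    A sketch of what that split step could look like (the staging table name and its columns are assumptions; the real package loads whatever your csv export produces):

```sql
-- Rows tagged as volume measurements go to vol_space,
-- everything else is treated as a database measurement.
INSERT INTO dbo.vol_space (server_name, vol_name, size_gb, used_gb, dt)
SELECT server_name, object_name, size_gb, used_gb, collected_dt
FROM dbo.staging_space
WHERE object_type = 'volume';

INSERT INTO dbo.db_space (server_name, db_name, size_gb, used_gb, dt)
SELECT server_name, object_name, size_gb, used_gb, collected_dt
FROM dbo.staging_space
WHERE object_type = 'database';

TRUNCATE TABLE dbo.staging_space;  -- ready for the next load
```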

  5. Configure data purity and data purge jobs

    I wanted to implement purge and data purity tasks in my solution in order to minimize the effort of supporting it. In the beginning I was storing data for the last 180 days; now I store data for the last 730 days (for reporting purposes), but I use only the last 180 days to calculate the ‘days to live’ metric. An example of how to control the period will be discussed in just a moment.

    The data purge job is a simple SQL Agent job that runs every day. It consists of the following:

     
    use capacity_management
    go
    declare @cutoff datetime
    SET @cutoff = DATEADD(dd, -730, GETDATE())
    -- purge collected rows older than 730 days (vol_space gets the same treatment)
    DELETE FROM capacity_management.dbo.db_space WHERE dt <= @cutoff
     
    

    The data purity job is again a SQL Agent job, running every day just before the report generation. It deletes the data for the tempdb disks, quorum disks, OS root disks, and other non-SQL-related ones.

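    A sketch of what that cleanup might look like (the filter patterns are assumptions; use whatever naming convention identifies your tempdb, quorum and OS volumes):

```sql
USE capacity_management;
GO
DELETE FROM dbo.vol_space
WHERE vol_name LIKE '%tempdb%'   -- tempdb volumes
   OR vol_name LIKE '%quorum%'   -- cluster quorum disks
   OR vol_name LIKE 'C:\%';      -- OS root disks
```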
Okay, now that we have everything configured, let us go through the information that is being captured and the mechanism that calculates the ‘days to live’ metric.

First of all let us check what type of information I have collected in my capacity_management.dbo.vol_space table.
We need only a few columns to start the calculation:

Now that we have this, let us check what the view capacity_management.dbo.vol_space_change_vw is storing:

Ok, we are almost there. We have the average growth per day calculated.

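Conceptually, the view derives the daily growth rate from consecutive snapshots, along these lines (a sketch only, not the actual view definition from structure.sql; it assumes usage grows roughly monotonically between the first and last snapshot):

```sql
CREATE VIEW dbo.vol_space_change_vw
AS
SELECT server_name,
       vol_name,
       MIN(dt) AS first_dt,
       MAX(dt) AS last_dt,
       -- growth between the first and last snapshot, averaged per day
       (MAX(used_gb) - MIN(used_gb))
         / NULLIF(DATEDIFF(dd, MIN(dt), MAX(dt)), 0) AS avg_growth_gb
FROM dbo.vol_space
GROUP BY server_name, vol_name;
```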
And last but not least, the stored procedure that we will be using for the forecast, capacity_management.dbo.vol_space_capacity_sp:

Now we have information about the total size and used space of the volumes on the first date (@beginDT) and at the current moment (@endDT). The stored procedure is able to calculate the average growth for each volume; this is the avg_growth_gb column. Once we have this information we can take the available space on the volume and divide it by the average growth per day, resulting in the ‘days to live’ metric. In our case, for disk S:\BeerStore045_Data007\, if the data on the disk continues to grow at the same rate it will take 34.11 days to fill it up.

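The core of that forecast boils down to a single expression (a sketch with assumed column names; the view it selects from is hypothetical shorthand for the pieces combined above):

```sql
-- days to live = free space / average daily growth
SELECT server_name,
       vol_name,
       (size_gb - used_gb) AS free_gb,
       avg_growth_gb,
       CASE WHEN avg_growth_gb > 0
            THEN (size_gb - used_gb) / avg_growth_gb
            ELSE NULL                 -- flat or shrinking volume: no ETA
       END AS days_to_live
FROM dbo.vol_space_capacity_vw;       -- hypothetical view for illustration
```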
The same can be done for database objects using the stored procedure db_space_capacity_sp instead.

I hope that you will find this useful in your day-to-day activities and that it will save you time and effort.
In the next article we will cover building the SSRS reports.

Translated from: https://www.sqlshack.com/proactive-capacity-management/
