
Understanding System Statistics
by Jonathan Lewis

Understanding the optimizer's use of system statistics can make a big difference when migrating to a new version of the Oracle Database.

What is the most important component of the Oracle Database engine? My vote goes to the optimizer. Everything the database does is SQL, and every piece of SQL has to be translated into something that can work efficiently. Whatever you do with your data, the optimizer gets involved.

The Cost-Based Optimizer (CBO) is a subtle and complex piece of code, so when it first appeared in Oracle7 it wasn't too surprising that a few problems existed. However, it didn't take many patch releases before it was perfectly viable to implement a purely cost-based system. I recall setting up several such systems in 7.1, and at least one major system in 7.2.

Nevertheless, a surprising number of sites avoided the CBO for quite a long time—the reason being that many developers and DBAs were still thinking in a "rule-based" manner, and therefore doing things that made life difficult for the CBO. Basically, they were "lying" to their optimizer and then wondering why it made poor decisions.

I suspect the reason for my early success was that I happened to notice something that is now well understood: the CBO "likes" to do tablescans when the value of the parameter db_file_multiblock_read_count is big, and "prefers" to use indexes when the value is small. As we now know, a few other parameters display similar characteristics—that is, the RBO liked them big but the CBO needed them small. And, of course, there was the little detail that in a "tie-break" the CBO works down the list of tables in the from clause, whereas the RBO works up that list.

CBO is very different from RBO — but there are just a few particularly critical differences that you must know about, because if you don't understand them and keep treating CBO as you would treat the RBO, you can give yourself a number of unnecessary problems.

History Repeats Itself

The CBO keeps evolving. Some releases of Oracle introduce minor enhancements, some releases introduce major enhancements, and some releases change the paradigm.

Unless you spot the major enhancements and paradigm shifts, you can end up fighting the CBO for years, trying to work around remembered weaknesses instead of playing to new strengths — just as many people did in the shift from RBO to CBO.

So are there any recent changes in the CBO that could have as much impact on our thinking as the change from the RBO? The answer is "yes."

In Oracle9i Database, Oracle added system statistics (which I rate as a new paradigm) and automatic workarea sizing (which I rate as a major enhancement). In Oracle Database 10g, we get various tuning advisors that optimize your developers' time by helping them produce more efficient SQL, indexing, materialized views, or PL/SQL more quickly. We also get query profiles, a major enhancement that stores statistical information in the database for the purpose of helping the optimizer make better decisions. This information is particularly useful when you have to deal with untouchable third-party SQL.

Across both versions we get dynamic_sampling, a very useful feature introduced in 9i that is particularly beneficial for data warehouse and decision support systems. But if you haven't paid any attention to dynamic sampling in 9i, it can become a nuisance if you're running an OLTP system on 10g, because dynamic sampling is effectively enabled by default, and it's probably an unnecessary overhead.

The most important of all these changes is the introduction of system statistics. In fact, I would go so far as to say that one of the key steps in migrating from Oracle8i to Oracle9i is enabling system statistics and working through the impact they have on your system. Similarly, system statistics are really quite critical to getting the most out of the optimizer in 10g, and the best time to get familiar with system statistics is the moment you decide to migrate from 8i.

This feature is so significant that I'll devote the rest of this article to nothing else, saving dynamic_sampling and profiles as topics for another day.

System Statistics

Prior to Oracle9i, the CBO based its calculations on the number of I/O requests that would be needed to satisfy a query, using various constants to massage the figures for tablescans and throwing in a few rules to account for things such as caching of small indexes. (See my DBAzine.com article "Why isn't Oracle using my index?" for an introduction to this topic.)

Initially some of the assumptions built into the optimizer were a little naive, but as time passed assumptions were refined, algorithms improved, and new features implemented. However, the side effect of estimating I/O requests became a persistent limitation.

In 9i, Oracle introduced cpu_costing, a mechanism that allows the CPU cost of an operation to be included as part of the overall estimate. This feature is enabled in 9i only if you collect system statistics; in 10g, it's enabled by default.

So what does cpu_costing do, and what exactly are system statistics? Let's start with system statistics, using a couple of calls to the dbms_stats package to demonstrate. (This example uses 9.2.0.4, and your account will need to be granted the role gather_system_statistics for it to work.)
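The gathering calls look something like this (a sketch using the 'start'/'stop' gathering mode of dbms_stats.gather_system_stats; the length of the sampling window is up to you, but it should cover a representative workload):

    -- Start collecting workload system statistics
    execute dbms_stats.gather_system_stats('start')

    -- ...leave a representative workload running for, say, 30 minutes...

    -- Stop collecting and save the results
    execute dbms_stats.gather_system_stats('stop')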


To see what you have done, you can query a table (owned by the SYS schema) called aux_stats$. After gathering system statistics, this table will contain a few critical numbers used by the new optimizer algorithms to calculate costs. (You have to flush the shared_pool to invalidate existing execution plans, though.) The following query will show you the current settings:
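(A sketch; the statistics are held as name/value pairs in the pname and pval1 columns, under the sname value 'SYSSTATS_MAIN'.)

    select  pname, pval1
    from    sys.aux_stats$
    where   sname = 'SYSSTATS_MAIN'
    ;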


The exact list of results is version dependent (the code is still evolving, and some versions of Oracle gather more statistics than others), but you will probably see something like this:
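The output below is illustrative only; the figures are invented to show the shape of the result, not taken from any particular system:

    PNAME                          PVAL1
    ------------------------------ ----------
    CPUSPEED                              559
    SREADTIM                            1.299
    MREADTIM                           10.204
    MBRC                                    6
    MAXTHR                           13899776
    SLAVETHR                           182272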


10g also introduces a few extra rows with values that are set as the database starts up:
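These are the "noworkload" statistics. Again the figures shown are illustrative only, although IOSEEKTIM defaults to 10 (milliseconds) and IOTFRSPEED to 4096 (bytes per millisecond), while CPUSPEEDNW is calibrated as the instance starts:

    PNAME                          PVAL1
    ------------------------------ ----------
    CPUSPEEDNW                         904.86
    IOSEEKTIM                              10
    IOTFRSPEED                           4096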


I've quoted the SQL for convenience; in fact, the approved method for viewing this information is the get_system_stats procedure in the dbms_stats package. There is also a set_system_stats procedure if you want to "adjust" the values without gathering them properly.
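For example, hand-setting a single value might look like this (a sketch; the MBRC figure is invented for illustration, and hand-set statistics should be used with care):

    begin
            dbms_stats.set_system_stats(
                    pname  => 'MBRC',
                    pvalue => 8        -- invented value, for illustration only
            );
    end;
    /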

Tablescans

There are two significant changes that apply to the optimizer's cost calculations when system statistics are available. You will note first that sys.aux_stats$ holds values for the following:

  • sreadtim: Average time for a single-block read request in milliseconds
  • mreadtim: Average time for a multi-block read request in milliseconds
  • MBRC: Average number of blocks in a multi-block read

Using this information, Oracle can estimate how long it will take to do a tablescan (or index fast full scan). The arithmetic is easy: it's just the number of multi-block reads needed to do the scan, multiplied by the average time to do a multi-block read. Ignoring the minor changes due to automatic segment space management, we just take the high-water mark (HWM, in blocks) and work from there:

Time to completion = mreadtim * HWM / MBRC

Rather than reporting this "time to completion" as the cost of the query, Oracle restates the time in terms of the equivalent number of single-block reads. To do this, simply divide the time to completion by the average time for a single-block read:

Cost = time to completion / sreadtim

Or, putting the two formulae together and rearranging terms:

Cost of tablescan = (HWM / MBRC) * (mreadtim / sreadtim)
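As a worked example, take the illustrative figures from the sample output above (sreadtim = 1.299, mreadtim = 10.204, MBRC = 6) and a table with a high-water mark of 10,000 blocks:

    Time to completion = 10.204 * 10,000 / 6 = 17,007 ms (about 17 seconds)
    Cost               = 17,007 / 1.299      = 13,092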

From this example, you can see that the cost of a query is its estimated time to completion, but expressed in units of single-block reads rather than in proper time units.

When you start using system statistics, the optimizer automatically starts to be more "sensible" when choosing between tablescans and indexed access paths because the cost of the multiblock reads used for tablescans will include a proper and appropriate time component.

Historically, the cost of a tablescan was simply:

Cost of tablescan = HWM / (modified db_file_multiblock_read_count)

This formula made little allowance for the fact that your choice of value for the parameter db_file_multiblock_read_count could be unrealistic, nor did it allow for the extra time that a read of db_file_multiblock_read_count blocks would take compared to a single-block read.

This weakness is largely why Oracle created the optimizer_index_cost_adj parameter in 8.1.6 to allow you to introduce a factor that was similar in intent to the mreadtim that you collect in system statistics. (You may have spotted the similarity between the mreadtim/sreadtim element in the new cost formula and the common method for estimating a sensible optimizer_index_cost_adj.) But there are some unexpected side effects to using the optimizer_index_cost_adj parameter that can cause problems, and the mechanisms that come into play when you start using system statistics are much more robust.

It is still meaningful, by the way, to use optimizer_index_cost_adj as a clue to table caching effects (specifically, what percentage of single-block table reads are likely to turn into real read requests) even when using system statistics. There are some indications in 10g, though, that even this clue will become unnecessary in the not too distant future.

CPU Costs

System statistics do more than correct for the I/O and time trade-off between single-block and multi-block reads. They also cater for two further enhancements (or corrections) to costing: first, Oracle can be even better at balancing tablescans against indexed access paths; second, Oracle can be smart about rearranging predicate order.

Note how the statistics include the apparent CPU speed, nominally in MHz. Don't be alarmed if this is nothing like the actual CPU speed of your system — the figure is probably just an internal calibration of a baseline operation that Oracle uses to produce relative CPU costs of other operations. On one machine running at 2.8GHz, I typically come up with an apparent CPU speed of a few hundred MHz. (Bear in mind that what you see is the nominal speed of a single CPU, not the sum of all the CPUs in a multi-CPU system.)

So why does it help the optimizer to know the (apparent) speed of your CPU? Consider an example where you have a choice:

  • Option 1: use an index on a simple date column to find 20 scattered rows in a table.
  • Option 2: use a tablescan, checking every single row in the table to see if the date column falls in the correct range.

Oracle may decide, based purely on the number and speed of single-block and multi-block reads, that a tablescan would be quicker. But how much CPU will this take if the tablescan requires a test like the following on 10,000 rows:
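(An illustrative predicate; the column name and date range are invented.)

    where   date_col between to_date('01-Mar-2004','dd-mon-yyyy')
                     and     to_date('31-Mar-2004','dd-mon-yyyy')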


CPU operations take time as well, and if the number and nature of the tests that have to be performed on a tablescan require a lot of CPU, Oracle factors this cost into the equation and might switch a query from a CPU-intensive tablescan to an index range scan. You can see this from the formula in the Oracle9i Database Performance Tuning Guide and Reference (A96533, p. 9-22):
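    Cost = (
              #SRds * sreadtim +
              #MRds * mreadtim +
              #CPUCycles / cpuspeed
           ) / sreadtim

where #SRds is the number of single-block reads, #MRds the number of multi-block reads, and #CPUCycles the number of CPU cycles the query is expected to use.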


The #CPUCycles value in this equation is visible in the cpu_cost column of the newer versions of the plan_table used by the Explain Plan facility. (Here's another small, but important, detail of optimizer enhancements: always check to see how the explain plan facility has evolved.)

In fact, if you use this column in the cost equation, the formula needs a fudge factor thrown in—cpuspeed is recorded in MHz, and the other timings are given in milliseconds, so the CPU component of the formula looks as if it should be adjusted by a factor of 1,000:
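    Cost = (
              #SRds * sreadtim +
              #MRds * mreadtim +
              #CPUCycles / (cpuspeed * 1000)
           ) / sreadtim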


Predicate Order

Apart from the high-level choices, knowledge of CPU operations and the complexity of predicates can allow Oracle to do things that you might never consider in a manual tuning exercise. The best demonstration of this principle comes from a (slightly contrived) worked example. For the purposes of a repeatable test, the following code ran under 9.2.0.4 and used a locally managed tablespace with a uniform extent size of 1MB and manual segment space management.
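The script below is a sketch of the kind of setup involved rather than the exact original: the table and column names (t1, d1, n1) match the later discussion, but the data-generating expressions are illustrative stand-ins for a data set engineered so that the test query's index range scan covers roughly 1,560 rows:

    create table t1
    nologging
    as
    select
            trunc(sysdate) + trunc((rownum - 1)/100)/1440   d1,       -- 100 rows per minute
            rownum                                          n1,       -- unique for every row
            rpad('x',100)                                   padding   -- filler to give rows some width
    from
            all_objects
    where
            rownum <= 10000
    ;

    create index t1_i1 on t1(d1, n1);

    begin
            dbms_stats.gather_table_stats(
                    ownname => user,
                    tabname => 'T1',
                    cascade => true
            );
    end;
    /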


Based on this data set, here are two virtually identical queries. Which one of them will be faster? The index hint is there just in case your test database has some unexpected parameter settings that push the optimizer into doing a tablescan:
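Here is a sketch of such a pair (the literals are chosen to suit the illustrative data set above; the originals differ):

    -- Query 1
    select  /*+ index(t1) */
            padding
    from    t1
    where   n1 = 500
    and     d1 between trunc(sysdate)
               and     trunc(sysdate) + 15/1440
    ;

    -- Query 2: identical, except that the predicates are swapped
    select  /*+ index(t1) */
            padding
    from    t1
    where   d1 between trunc(sysdate)
               and     trunc(sysdate) + 15/1440
    and     n1 = 500
    ;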


Notice that the only difference between the two queries is the order of the predicates. If you run the queries through autotrace (set autotrace on), you'll find that they both produce the same plan and the same number of consistent gets to execute.
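Schematically, the autotrace output for both queries looks like this (figures elided):

    Execution Plan
    ----------------------------------------------------------
       0      SELECT STATEMENT Optimizer=CHOOSE
       1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T1'
       2    1     INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE)

    Statistics
    ----------------------------------------------------------
        ...
        the same number of consistent gets for both queries
        ...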


So when I put each query in turn into a PL/SQL loop that executed it 10,000 times, why did the first query take 4.34 CPU seconds to run 10,000 times (at 2.8GHz) and the second query take 13.42 CPU seconds if system statistics were not enabled? And why, when I enabled system statistics, did both queries run in the shorter time?

The answer isn't visible in autotrace, but if you run the two queries through Oracle's dbms_xplan package to get the full execution plan, you see the following for the faster query when system statistics are not enabled:
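The interesting part is the Predicate Information section. Schematically (the access() details and date literals are elided), the filter() line tests the numeric column first:

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       ...
           filter("T1"."N1"=500 AND "T1"."D1"<=...)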


If you swap the order of the predicates (still without system statistics) to check the slower query, you will spot a small change in the Predicate Information section:
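This time, schematically, the date test appears first in the filter() line:

    Predicate Information (identified by operation id):
    ---------------------------------------------------
       ...
           filter("T1"."D1"<=... AND "T1"."N1"=500)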


However, when you enable system statistics (specifically, it's the cpu_costing component that counts), it doesn't matter which way round you write the predicates in your query—the execution plan shows that the filter() line performs the numeric check before the date test, and the query runs in the shorter time.

There are two important points to address here. First, why is there a significant difference in the run time, and second, what was Oracle doing when it found the quicker path automatically?

The difference in run time depends on the fact that I engineered the data set and the query to force Oracle to check both columns (d1 and n1) for every index entry covered by the index range scan, and the specific predicate values required Oracle to check 1,560 rows. Of those 1,560 rows, every single one will pass the date test, but only one will pass the numeric test. So, by switching the order of the predicates manually, I was actually choosing between executing:

1,560 date tests that pass, followed by 1,560 numeric tests

and

1,560 numeric tests of which one passes, and one subsequent date test.

The difference in run time is entirely due to the absence of 1,559 date tests (multiplied by the 10,000 iterations of the loop).

How did Oracle find the optimal order of predicate evaluation when cpu_costing was enabled? First, as part of its "traditional" optimization mechanism, the optimizer always works out the selectivity (the fraction of rows that will be returned) for each of the predicates on a table. Second, Oracle keeps an internal reference list of basic operations with their costs, in the form "operation X requires N CPU units," and this list is calibrated against the cpuspeed element of system statistics.

Combining these two sets of figures, the optimizer can work out figures for "number of tests required for this predicate * CPU cost of test" and "number of rows left for the next predicate because of this predicate" as it re-arranges the order of predicates—all it has to do is minimize the total CPU cost across all tests.

In summary, when CPU costing is enabled, a new code path comes into play that may improve the performance of some of your queries without affecting the visible execution path, and you will only understand what's going on if you check the new explain plan output. (This re-arrangement of predicate order can be stopped by using the hint /*+ ordered_predicates */, but the hint is deprecated in 10g.)

Conclusion

Although system statistics haven't yet achieved much popularity, they are not only important overall but actually critical to certain optimization options in 10g. System statistics give the optimizer more of the "truth" about how your system really performs, and therefore allow the optimizer to produce a better match between estimated and actual query execution time.

System statistics also include details that allow the optimizer to trade I/O costs against CPU costs. At a gross level, knowledge of CPU speeds allows even better decisions on execution paths; at a finer level of detail, Oracle even allows for re-arranging the order of predicate testing to minimize CPU time after the path has been decided.

If you are still running 8i and plan to migrate to Oracle9i, you should include system statistics as part of your migration plan. If you are running 9i without using system statistics, you should enable them as soon as possible. If you are migrating to 10g without first implementing system statistics on 9i, then make some allowances for investigating system statistics in your test plans. If you don't, you may spend a lot of time trying to understand why odd things are happening in those places where you've put in a workaround for a problem that wouldn't have existed if you had been doing the right thing in the first place.
